WO2014092360A1 - Method for evaluating patents based on complex factors

Info

Publication number
WO2014092360A1
Authority
WO
WIPO (PCT)
Prior art keywords: evaluation, factors, independent, engine, factor
Application number: PCT/KR2013/010950
Other languages: French (fr)
Inventor
Jung Ae Kwak
Kyeong Seon CHO
In Jae Park
Seung Taek Oh
Un Young Cho
Sang Geun Yu
Original Assignee
Kipa.
Priority claimed from KR1020120144315A (KR101456188B1)
Priority claimed from KR1020120144325A (KR101456187B1)
Application filed by Kipa.
Publication of WO2014092360A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 — Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 — Services
    • G06Q 50/18 — Legal services; Handling legal documents
    • G06Q 50/184 — Intellectual property management

Definitions

  • This disclosure relates to a method for evaluating patents.
  • IP owners evaluate their IP rights on their own or entrust IP evaluation to profit/non-profit organizations.
  • results of patent evaluation may be utilized for various purposes, such as: maintenance of patents; offering strategies for utilizing patents; support for research planning; estimation of patents in light of rights, economy, and environment; invention evaluation; identification of critical inventions and priorities; association with business strategies (strategic alliances); allocation of R&D planning resources; technology evaluation for loans from financial organizations; evaluation for choosing a provider (subject) of a government direct/indirect technical development support business; evaluation of intangible assets; converting customers' intangible assets into current values based on clear and objective materials in consideration of technical, economic, and social aspects; compensation for inventors; asset evaluation (for depreciation); evaluation of IPs for the purpose of technology trade (technology transfer, M&A, etc.); evaluation of IP rights for technology-backed loans; and attraction of investment.
  • Fig. 1 is a view illustrating the necessity of introducing a patent evaluation system.
  • an embodiment of this disclosure aims to suggest a system for evaluating a patent. Another embodiment aims to suggest a complex-factors algorithm for otherwise vague standards used when a patent is automatically evaluated by the system. Still another embodiment aims to suggest a complex-factors algorithm for vague standards used when evaluating the possibility of invalidation (or patent stability) of a patent in automatic evaluation by the system.
  • the method may comprise: receiving an evaluation request for a specific patent from a user device; and providing an evaluation result, which is yielded for the specific patent using an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent), to the user device.
  • the evaluation engine may include an algorithm in which two or more of the total number of claims, the number of independent claims, and the number of dependent claims are organically systematized into the complex factor.
  • the evaluation engine may yield the evaluation result using the complex factors of the algorithm.
  • the yielding of the evaluation result using the complex factors of the algorithm may include: determining whether the number of independent claims of the specific patent is equal to or higher than a predetermined number; determining whether there exists a family patent for the specific patent, when the number of independent claims is less than the predetermined number; and determining whether there exist both an apparatus claim (or product claim) and a method claim in the specific patent, when the number of independent claims is equal to or more than the predetermined number.
  • the yielding the evaluation result using the complex factors of the algorithm may include: determining whether the total number of claims with respect to the specific patent is higher than a predetermined number.
  • the yielding the evaluation result using the complex factors of the algorithm may include: determining whether the number of dependent claims is higher than a predetermined number.
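  • For illustration only, a minimal sketch of such a complex-factor check follows; the threshold value, dictionary fields, and return labels are assumptions made for this sketch, not part of the disclosure:

```python
# Minimal sketch of the complex-factor check on claim counts (cf. Fig. 5).
INDEP_THRESHOLD = 2  # the "predetermined number" of independent claims (assumed)

def complex_factor_check(patent: dict) -> str:
    if patent["independent_claims"] >= INDEP_THRESHOLD:
        # enough independent claims: check whether both claim categories exist
        if patent["has_apparatus_claim"] and patent["has_method_claim"]:
            return "broad claim coverage"
        return "single claim category"
    # few independent claims: fall back to the family-patent check
    return "backed by family patents" if patent["has_family"] else "narrow"

print(complex_factor_check({"independent_claims": 3, "has_family": False,
                            "has_apparatus_claim": True, "has_method_claim": True}))
```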
  • the evaluation engine may be built by performing machine learning on an expert's (or patent technician's) evaluation results for sample patents in view of each evaluation item.
  • the evaluation engine is built by one or more of: calculating correlations between evaluation factors and one or more predetermined evaluation items based on the expert’s (or patent technician’s) evaluation results for the sample patents; mapping the evaluation items with the evaluation factors based on the calculated correlations; and performing the machine learning about the expert’s evaluation results using the evaluation factors mapped with the evaluation items.
  • the evaluation factors may include: information extracted from one or more of bibliographic information, prosecution history information, a specification, and patented claims; or information extracted by performing a natural language process on the specification and the patented claims.
  • the expert’s evaluation may be performed for each technical field, so the evaluation engine may be built for each technical field; the evaluation result may then be provided using an evaluation engine of the technical field corresponding to that of the specific patent.
  • the building of the evaluation engine may include: calculating correlation values between the results evaluated by a plurality of experts in each technical field; and building the evaluation engine based on the expert’s evaluation result having the highest correlation value among the calculated correlation values.
  • the evaluation server may comprise: an interface unit configured to receive an evaluation request for a specific patent from a user device; and an evaluation engine unit configured to generate an evaluation result for the specific patent.
  • the interface unit provides the generated evaluation result to the user device.
  • the evaluation engine unit may include an algorithm in which two or more of the total number of claims, the number of independent claims, and the number of dependent claims are organically systematized into the complex factor. The evaluation engine yields the evaluation result using the complex factors of the algorithm.
  • the method may include: receiving an evaluation request for a specific patent case from a user device; and providing an evaluation result, which is yielded for the specific patent using an evaluation engine, to the user device.
  • the evaluation engine may include an algorithm in which two or more of a factor about the length of an independent claim of the specific patent case, a factor about whether there is a prior technical document for the specific patent case, and a factor about whether there is a family patent are organically systematized into the complex factor.
  • the evaluation engine may determine how much possibility of the invalidation the specific patent case has according to the complex factors.
  • the yielding of the evaluation result using the complex factors of the algorithm may include: assigning reference points to each claim of the specific patent case according to the number of independent claims and the number of dependent claims; and increasing or decreasing the reference points of each claim by a predetermined value based on one or more of the length of an independent claim, whether there is a prior technical document for the specific patent case, and whether there is a family patent.
  • the increasing or decreasing of the reference points of each claim may include: determining whether there is the prior technical document; and increasing or decreasing the reference points of each claim based on whether the prior technical document was cited during the prosecution of the specific patent case.
  • a patent may be automatically evaluated by a system, and a result of the evaluation may be suggested. Further, according to an embodiment of this disclosure, an algorithm for some standards may be suggested, allowing for more quantitative, objective automatic evaluation for a patent.
  • Fig. 1 is a view illustrating the necessity of introducing a patent evaluation system
  • Fig. 2 is a view illustrating the entire architecture of a patent evaluation system according to an embodiment of the present invention
  • Fig. 3 is a view illustrating in detail one or more servers 100 as shown in Fig. 2;
  • Fig. 4 is a view illustrating in detail an example of the configuration of domestic/foreign patent evaluation servers 110 and 130 as shown in Fig. 3;
  • Fig. 5 is a flowchart illustrating an example of an algorithm for processing some evaluation items
  • Fig. 6 is a flowchart illustrating an example of another algorithm for processing some evaluation items
  • Fig. 7 is a flowchart illustrating an example of still another algorithm for processing some evaluation items
  • Fig. 8 is a flowchart illustrating an example of an algorithm for evaluating the possibility of invalidation of a patent (or patent stability) based on complex factors;
  • Fig. 9 is a flowchart illustrating an example of yet still another algorithm for evaluating the possibility of invalidation of a domestic patent (or patent stability) based on complex factors;
  • Fig. 10 is a flowchart illustrating an example of an algorithm for evaluating the possibility of invalidation of a U.S. patent (or patent stability) based on complex factors;
  • Fig. 11 is a flowchart illustrating another algorithm for evaluating the possibility of invalidation of a U.S. patent (or patent stability) based on complex factors;
  • Fig. 12 is a flowchart illustrating a method of establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by performing a machine learning about an expert’s evaluation result according to an embodiment of the present invention
  • Fig. 13 is a flowchart illustrating a method of providing a patent evaluation service using an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) according to an embodiment of the present invention.
  • Fig. 14 illustrates the physical configuration of evaluation servers 110 and 130 and service servers 120 and 140 according to an embodiment of the present invention.
  • the technical terms used herein are merely to describe particular embodiments and should not be construed as limiting. Unless defined otherwise, the technical terms used herein should be interpreted as generally understood by those of ordinary skill in the art and should not be construed as unduly broad or narrow. Further, technical terms that do not correctly express the spirit of the present invention should be replaced with ones that can be correctly understood by those of ordinary skill in the art. General terms used herein should be interpreted as defined in dictionaries or according to the context and should not be interpreted as unduly narrow.
  • first and second may be used to describe various components, but these components are not limited thereto. The terms are used only for distinguishing one component from another.
  • a first component may also be referred to as a second component, and the second component may likewise be referred to as the first component.
  • Fig. 2 is a view illustrating the entire architecture of a patent evaluation system according to an embodiment of the present invention.
  • a patent evaluation system includes one or more servers 100 and one or more databases (hereinafter, simply referred to as “DB”) 190.
  • the one or more servers 100 may be remotely managed by a managing device 500.
  • the one or more servers 100 are connected to a wired/wireless network and may provide a user device 600 with an evaluation result service and other various services. Specifically, when receiving a request for an evaluation service for a specific patent case from the user device, the one or more servers 100 may provide a result from evaluating the specific patent case.
  • Fig. 3 is a view illustrating in detail one or more servers 100 as shown in Fig. 2.
  • one or more servers 100 may include an evaluation server 110 for domestic patents (e.g., Korean patents), a service server 120 for domestic patents, an evaluation server 130 for foreign patents (e.g., U.S. patents), and a service server 140 for foreign patents.
  • the domestic patent service server 120 and the foreign (e.g., U.S.) patent service server 140 are shown to be physically separated from each other, but these servers may be integrated into a single physical server. Further, the servers 110, 120, 130, and 140 as illustrated may be integrated into a single physical server.
  • the above-described one or more databases 190 may include patent information DBs 191 and 192, evaluation factor (or evaluation index) DBs 193 and 194, similar patent DBs 195 and 196, and evaluation result DBs 197 and 198.
  • Each DB is illustrated to be provided separately from each other for the purpose of each of evaluation of domestic patents and evaluation of foreign (e.g., U.S.) patents, and the DBs may be integrated into one.
  • the domestic (e.g., Korean) patent information DB 191 and the foreign (e.g., U.S.) patent information DB 192 may be integrated into one, and the domestic (e.g., Korean) evaluation factor (or evaluation index) DB 193 and the foreign (e.g., U.S.) evaluation factor (or evaluation index) DB 194 may be integrated into one.
  • the DBs all may be integrated into one that may be divided into fields.
  • Such DBs may be generated based on what is received from an external DB provider.
  • the server 100 may include a data collecting unit 150 that receives a domestic (e.g., Korean) or foreign (e.g., U.S.) raw DB from the external DB provider.
  • the data collecting unit 150 physically includes a network interface card (NIC).
  • logically, the data collecting unit 150 may be a program built around an API (Application Programming Interface).
  • the data collecting unit 150 processes the raw DB received from the external DB provider and may store the processed data in one or more DBs 190, for example, the patent information DBs 191 and 192 connected to the server 100.
  • the domestic/foreign patent evaluation servers 110 and 130 may include one or more of specification processing units 111 and 131, natural language processing units 112 and 132, keyword processing units 113 and 133, similar patent processing units 114 and 134, evaluation factors (or evaluation indexes) processing unit 115 and 135, and evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136.
  • the specification processing units 111 and 131 extract information from the one or more DBs 190, for example, the patent information DBs 191 and 192, and parse (or transform) the information.
  • the specification processing units 111 and 131 may extract one or more of a patent specification, bibliographic information, prosecution history information, claims, and drawings and may store the extracted information in each field of the evaluation factor (or evaluation index) DB.
  • the natural language processing units 112 and 132 perform a natural language process on text included in the extracted patent specification and the claims.
  • the “natural language process” refers to a computer analyzing a natural language used for, e.g., general conversation, rather than a special programming language for computers.
  • the natural language processing units 112 and 132 may conduct sentence analysis, syntax analysis, and a process of a mixed language. Further, the natural language processing units 112 and 132 may carry out a semantic process.
  • the keyword processing units 113 and 133 extract keywords from each patent based on a result of the natural language process.
  • a scheme such as a VSM (Vector Space Model) or LSA (Latent Semantic Analysis) may be used.
  • the “keyword” of a patent specification refers to the word(s) that represent the subject of the patent specification; for example, in the instant specification, “patent evaluation” may be a keyword.
  • the similar patent processing units 114 and 134 may search patents closest to each patent based on the extracted keywords and may store the results of search in the similar patent DBs 195 and 196.
  • similar patent groups are generally known to belong to the same subclass in the IPC (International Patent Classification), but according to an embodiment of this disclosure, similar patents may be searched from other subclasses as well as the same subclass.
  • a mere increase in the number of keywords may lead to extraction of inaccurate keywords, thus resulting in completely different patents being searched from other sub classes as similar patents.
  • accordingly, a proper number of keywords is extracted based on a simulation over the number of keywords, and similar patents are searched with the extracted keywords.
  • the evaluation factor (or evaluation index) processing units 115 and 135 extract values of evaluation factors (or evaluation indexes) from one or more of a patent specification, bibliographic information, prosecution history information, claims, and drawings and store the extracted values in the evaluation factor DBs 193 and 194.
  • among the evaluation factors, those for evaluating Korean patents may differ from those for evaluating foreign patents.
  • evaluation factors for evaluating Korean patents are first listed as below:
  • evaluation factors for evaluating a foreign patent e.g., a U.S. patent, may be listed in the following table:
  • evaluation factors are merely examples, and any information that may be directly induced from patent information may be utilized as evaluation factors (evaluation indexes). Further, any information that may be indirectly obtained or induced by processing patent information may be used as evaluation factors (evaluation indexes).
  • the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 evaluate each patent for each of predetermined evaluation items based on the evaluation factors and evaluation mechanism stored in the evaluation factor DBs 193 and 194 and produce results of the evaluation. Further, the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 may store the produced evaluation results in the evaluation result DBs 197 and 198.
  • the evaluation items may be defined as the strength of patent right, quality of technology, and usability. Alternatively, the evaluation items may be defined as strength of patent right and marketability (or commercial potential). Such definitions may be changed depending on the main object of patent evaluation. Accordingly, the scope of the present invention is not limited to those listed above and may be expanded to anything to which the scope of the present invention may apply.
  • the evaluation mechanism may include a weight and a machine learning model.
  • the weight may be a value obtained from an expert’s (or patent technician’s) evaluation results with respect to several sample patents.
  • the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 will be described below in detail with reference to Fig. 4.
  • the domestic/foreign patent service servers 120 and 140 may include one or more of evaluation report generating units 121 and 141 and portfolio analyzing units 122 and 142.
  • the evaluation report generating units 121 and 141 generate evaluation reports based on evaluation results stored in the evaluation result DBs 197 and 198.
  • the portfolio analyzing units 122 and 142 may analyze portfolios of patents owned by a patent owner based on the information stored in the similar patent DBs 195 and 196. Further, the portfolio analyzing units 122 and 142 may analyze patent statuses for each right owner by technology (or technical field) or by IPC classification.
  • the portfolio analyzing units 122 and 142 may perform various types of analysis based on the similar patent DBs 195 and 196 and the evaluation result DBs 197 and 198.
  • the portfolio analyzing units 122 and 142 may perform various types of analysis such as patent trend analysis, or per-patentee total annual fees analysis.
  • the domestic/foreign patent service servers 120 and 140, upon receiving a request for an evaluation service for a specific patent from a user device, may provide results of evaluation of the specific patent case. Further, in response to a user’s request, the evaluation reports may be provided in the form of a webpage, an MS Excel file, a PDF file, or an MS Word file, or the results of analysis may be offered. To provide such services, a user authentication/authority managing unit 160 may be needed.
  • Fig. 4 is a view illustrating in detail an example of the configuration of domestic/foreign patent evaluation servers 110 and 130 as shown in Fig. 3.
  • the specification processing units 111 and 131 receive patent specification from the patent information DBs 191 and 192 and parse the patent specification.
  • the patent specification may be written in, e.g., XML, and the specification processing units 111 and 131 may include XML tag processing units to parse the XML.
  • the evaluation factor processing units 115 and 135 may include a first evaluation factor processing unit and a second evaluation factor processing unit for processing evaluation factors based on the parsed patent specification.
  • the first evaluation factor processing unit extracts values of evaluation factors that do not require the result of natural language processing based on the parsed patent specification. For example, the first evaluation factor processing unit calculates the values of evaluation factors that do not require natural language processing, such as a length of each independent claim, the number of claims, the number of claim categories, the number of independent claims, the number of domestic family patents, the number of foreign family patents as shown in Table 1 and stores the values in the evaluation factor DBs 193 and 194.
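  • As a minimal sketch of this step, the non-NLP factors can be computed directly from a parsed claims section; the XML tags and the depends-on attribute below are assumptions for illustration, not an actual patent-office schema:

```python
# Sketch: extract claim-count factors from a toy XML claims section.
import xml.etree.ElementTree as ET

SPEC = """<claims>
  <claim num="1">An apparatus comprising a sensor and a controller.</claim>
  <claim num="2" depends-on="1">The apparatus of claim 1, wherein the sensor is optical.</claim>
  <claim num="3">A method comprising measuring and controlling.</claim>
</claims>"""

root = ET.fromstring(SPEC)
claims = root.findall("claim")
independent = [c for c in claims if "depends-on" not in c.attrib]

factors = {
    "total_claims": len(claims),
    "independent_claims": len(independent),
    "dependent_claims": len(claims) - len(independent),
    "indep_claim_lengths": [len(c.text.split()) for c in independent],  # in words
}
print(factors)
```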
  • the natural language processing units 112 and 132 perform natural language processing based on the parsed patent specification.
  • the natural language processing units 112 and 132 include a morpheme analyzing unit and a TM analyzing unit that work based on a dictionary DB.
  • the “morpheme” refers to the smallest meaningful unit that cannot be analyzed any further, and the “morpheme analysis” refers to the first step of analysis of natural language, which changes an input string of letters into a string of morphemes.
  • the second evaluation factor processing unit of the evaluation factor processing units 115 and 135 calculates values of the remaining evaluation factors based on the result of the natural language processing. For example, a value of an evaluation factor such as “keyword consistency with similar foreign patent group” summarized in Table 1 above is calculated and stored in the evaluation factor DBs 193 and 194.
  • the keyword extracting units 113 and 133 that extract keywords based on the result of the natural language processing may include a keyword candidate selecting unit, a useless word removing unit, and a keyword selecting unit.
  • the keyword candidate selecting unit selects keyword candidates that may represent the subject of each patent.
  • the useless word removing unit removes useless words that have low importance from among the extracted keyword candidates.
  • the keyword selecting unit finally selects a proper number of keywords from among the remaining keyword candidates after the useless words have been removed and stores the selected keywords in the evaluation factor DBs 193 and 194.
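  • A minimal sketch of this three-stage pipeline follows; TF-IDF stands in for the VSM/LSA schemes mentioned earlier, and the stopword list and the number N of keywords are assumed values:

```python
# Sketch: candidate selection -> useless-word removal -> top-N selection.
from collections import Counter
import math

def extract_keywords(doc_tokens, corpus, stopwords, n=10):
    candidates = Counter(doc_tokens)                       # 1) candidates
    candidates = {t: f for t, f in candidates.items()      # 2) remove useless words
                  if t not in stopwords and len(t) > 2}
    n_docs = len(corpus)
    def tfidf(term):                                       # 3) rank by TF-IDF
        df = sum(1 for d in corpus if term in d)
        return candidates[term] * math.log((1 + n_docs) / (1 + df))
    return sorted(candidates, key=tfidf, reverse=True)[:n]

docs = [["patent", "evaluation", "claim", "claim"], ["patent", "claim", "engine"]]
print(extract_keywords(docs[0], docs, stopwords={"the"}, n=2))
```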
  • according to a simulation, the recall (that is, the chance of a keyword being reused) is 22.7% for 10 keywords and goes up to 54.1% for 50 keywords, whereas the accuracy falls as keywords are added, e.g., from 20.6% to 10.9%.
  • thus, a mere increase in the number of keywords, although able to raise the recall, may lower the accuracy; accordingly, an optimum number of keywords may be determined based on the measured accuracy.
  • the similar patent extracting units 114 and 134 search similar patents based on the keywords and may include a document clustering unit, a document similarity calculating unit, and a similar patent generating unit.
  • the document clustering unit primarily clusters similar patents based on the keywords.
  • the document similarity calculating unit calculates similarity between patent documents among the clustered patents.
  • the similar patent generating unit generates, as a result, actually closest patents among the primarily clustered patent documents and stores the result in the similar patent DBs 195 and 196.
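  • A minimal sketch of the similarity step, assuming each patent is represented as a keyword-weight vector; the cosine measure and the 0.3 cutoff are illustrative choices:

```python
# Sketch: rank candidate patents by cosine similarity of keyword vectors.
import math

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def similar_patents(target, candidates, cutoff=0.3):
    """candidates: {patent_id: keyword-weight dict}; returns ranked matches."""
    scored = ((pid, cosine(target, vec)) for pid, vec in candidates.items())
    return sorted((s for s in scored if s[1] >= cutoff),
                  key=lambda s: s[1], reverse=True)

print(similar_patents({"sensor": 1.0, "control": 0.5},
                      {"P1": {"sensor": 0.9, "control": 0.4}, "P2": {"tyre": 1.0}}))
```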
  • the patent evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 include a machine learning model unit and a patent evaluation unit.
  • the machine learning model unit performs a machine learning based on the expert evaluation result DB. For this, an evaluation result for sample patents may be received from each expert per technology (i.e., technical field).
  • the sample patents are a training set for the machine learning; a few hundred to a few thousand patents may be extracted from the patent information DBs 191 and 192 as the sample patents.
  • the sample patents may be selected so as to evenly cover the evaluation factors shown in Tables 1 and 2. For example, only a very few of all the issued patents (a few tens out of a few millions of patents) have a non-zero value for some evaluation factors, such as the number of invalidation trials, the number of trials to confirm the scope of a patent, the number of defensive confirmation trials for the scope of a right, or the number of requests for accelerating appeal.
  • the sample patents may be divided into a plurality of sets (for example, 10 sets). Among the plurality of sets, some may be used for machine learning, and the remainder may be used to verify the result of the machine learning.
  • the service servers 120 and 140 provide the values of the evaluation factors for each sample patent and the above-described evaluation items to the expert.
  • the service servers 120 and 140 provide the expert’s computer with a webpage listing the afore-described evaluation items (for example, strength of patent right, quality of technology, usability).
  • the service servers 120 and 140 provide the expert’s computer with a webpage listing the evaluation factors as summarized in Table 1 or 2.
  • the service servers 120 and 140 may map candidates of evaluation factors associated with each evaluation item and show the result of the mapping.
  • the expert enters a point (or score) for each evaluation item in the webpage while viewing the associated evaluation factor candidates for each evaluation item, and the service servers 120 and 140 may receive the point (or score) and store it in the expert evaluation DB.
  • the machine learning model unit extracts the evaluation factors actually associated with each evaluation item based on the expert’s evaluation results stored in the expert evaluation DB. Specifically, the machine learning model unit analyzes the correlation between each evaluation item and each evaluation factor based on those results. For example, it is analyzed whether the point (or score) of the evaluation item input by the expert increases when the value of the evaluation factor increases, or whether it increases when the value of the evaluation factor decreases.
  • a negative correlation value represents that a value of the evaluation item increases when a value of the evaluation factor decreases
  • a positive correlation value represents that a value of the evaluation item increases when a value of the evaluation factor increases.
  • the machine learning model unit extracts evaluation factors having a high correlation with a corresponding evaluation item, such as “strength of patent right,” among the evaluation items.
  • the correlation has a value between -1 and +1. Generally, its value, when included in a range from 0.2 to 0.6, is considered to be high. Accordingly, the machine learning model unit may select “the number of claims,” “the number of independent claims,” and “the number of claim categories” as evaluation factors for the “strength of patent right” evaluation item.
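  • A minimal sketch of this correlation screening, assuming per-patent factor values and expert scores are available as number lists; the 0.2 lower bound follows the range noted above:

```python
# Sketch: keep evaluation factors whose Pearson correlation with the
# expert's scores for an evaluation item is high enough.
import numpy as np

def select_factors(factor_values, item_scores, threshold=0.2):
    """factor_values: {factor_name: values per sample patent};
       item_scores: the expert's scores for one evaluation item."""
    selected = {}
    for name, values in factor_values.items():
        r = np.corrcoef(values, item_scores)[0, 1]
        if abs(r) >= threshold:
            selected[name] = r  # the sign records positive/negative correlation
    return selected

print(select_factors({"num_claims": [10, 25, 18, 7]}, [55, 80, 70, 40]))
```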
  • each expert per technology upon evaluation of patents, may exhibit different evaluation results, and to address such issue, a correlation between the experts may be additionally calculated according to an embodiment of the present invention.
  • technical fields may be categorized, e.g., into electronics, mechanics, chemistry, physics, and biology.
  • the experts per field may be grouped in pairs.
  • the correlation between experts A and B calculated for the electronics field is, as shown in Table 5, 0.64 for the Strength of Patent Right evaluation item, 0.39 for the Quality of Technology evaluation item, and 0.89 for the Usability evaluation item.
  • when the correlation between paired experts is low, the evaluation result produced by a pair of experts having a higher correlation may be used; alternatively, a higher weight may be assigned to one of the paired experts.
  • after defining the evaluation factors for each evaluation item in such a way, the machine learning model unit performs machine learning based on the expert’s (or patent technician’s) evaluation results (e.g., the evaluation points or scores) stored in the expert evaluation DB.
  • the machine learning serves to objectify the expert’s (or the patent technician’s) subjective evaluation results.
  • the machine learning model unit calculates a weight value based on an expert’s evaluation results, e.g., points (or scores) stored in the expert evaluation DB and performs a machine learning using the calculated weight.
  • the machine learning may be done per technology (or technical field).
  • the patent evaluation unit evaluates patent cases according to the result of the machine learning and stores the evaluated result in the evaluation result DBs 197 and 198.
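  • One plausible reading of this weight-based learning is an ordinary least-squares fit of factor values to the expert's scores, one model per technical field; the sketch below works under that assumption and is not the disclosed model itself:

```python
# Sketch: fit per-factor weights to expert scores by least squares.
import numpy as np

def fit_weights(X, y):
    """X: (n_samples, n_factors) factor values; y: expert scores."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w

def predict(X, w):
    return np.hstack([X, np.ones((X.shape[0], 1))]) @ w

X = np.array([[10.0, 2.0], [25.0, 4.0], [18.0, 3.0], [7.0, 1.0]])
w = fit_weights(X, np.array([55.0, 80.0, 70.0, 40.0]))
print(predict(X, w))  # reproduced scores for the sample patents
```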
  • Fig. 5 is a flowchart illustrating an example of an algorithm for processing some evaluation items.
  • the evaluation servers 110 and 130 first determine whether the number of independent claims of the patent targeted for evaluation is two or more (S510).
  • the family patent case may mean a patent existing in the same country.
  • the individual evaluation factors, when organically or systemically organized in an algorithm, may be named complex evaluation factors (or complex factors).
  • Fig. 6 is a flowchart illustrating an example of another algorithm for processing some evaluation items.
  • the evaluation servers 110 and 130 first assign initial (or basic) points (S611). Subsequently, the evaluation servers 110 and 130 determine whether the number of independent claims in a patent targeted for evaluation is two or more (S612).
  • the family patent case may mean a patent in the same country.
  • the evaluation servers 110 and 130 determine whether the number of dependent claims is large (S615). Such a determination may be made by comparing the number of dependent claims of the targeted patent with a predetermined value, or by comparing it with the average number of dependent claims across all issued patents.
  • if, in step S612 described above, the number of independent claims is two or more, the points are increased (S618), and whether a family patent case is present is determined (S619).
  • the evaluation servers 110 and 130 increase the points (S620) and determine whether an apparatus (or product) and method claim exist (S621). If no apparatus (or product) and method claims are present, the process goes back to step S615 described above.
  • the evaluation factors such as the number of independent claims, the number of dependent claims, and the number of claim categories have ambiguous standards; thus, rather than being determined individually, they are determined organically or systemically, leading to a more quantitative and objective evaluation.
  • the individual evaluation factors, when organically or systemically organized in an algorithm as shown in Fig. 6, may be named complex evaluation factors (or simply complex factors).
  • Fig. 7 is a flowchart illustrating an example of still another algorithm for processing some evaluation items.
  • the evaluation servers 110 and 130 first assign basic points (S711).
  • the evaluation servers 110 and 130 determine whether the total number of claims in a patent targeted for evaluation is more than a predetermined threshold (S712).
  • the threshold may be 20.
  • the points are increased by a predetermined value (S713), and the points are further increased by a predetermined value for each claim by which the total number of claims exceeds the threshold (S714).
  • the evaluation servers 110 and 130 determine whether the number of independent claims in the targeted patent is two or more (S715).
  • the evaluation servers 110 and 130 determine whether an apparatus (or product) claim is present (S718). If no apparatus (or product) claim exists, the points are decreased by a predetermined value (S719).
  • the evaluation servers 110 and 130 determine whether the number of dependent claims is large (S720). Such a determination may be made by comparing the number of dependent claims in the targeted patent with a predetermined value, or by comparing it with the average number of dependent claims across all issued patents.
  • the evaluation servers 110 and 130 increase the points by a predetermined value (S721). However, if the number of dependent claims is small, the evaluation servers 110 and 130 reduce the points by a predetermined value (S722).
  • if, in step S715 above, the number of independent claims is two or more, the evaluation servers 110 and 130 increase the points (S723) and determine whether the number of independent claims is more than three (S724).
  • if so, the evaluation servers 110 and 130 increase the points by a predetermined value for each independent claim beyond three (S725).
  • the evaluation servers 110 and 130 determine whether there is a family patent case (S726). If no family patent case exists, the process goes to step S729.
  • the evaluation servers 110 and 130 increase the points by a predetermined value (S727) and further increase the points by a predetermined value as the number of family patent cases grows (S728).
  • the evaluation servers 110 and 130 determine whether both an apparatus (or product) claim and a method claim exist (S729). If not, the process returns to the above-described step S728.
  • the points are increased by a predetermined value (S730), and it is determined whether the number of dependent claims is large (S731). If the number of dependent claims is large, the points are increased (S732). However, if the number of dependent claims is small, the points are reduced or the process is terminated.
  • the individual evaluation factors, when organically or systemically organized in an algorithm as shown in Fig. 7, may be named complex evaluation factors (simply complex factors).
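  • The Fig. 7 flow may be condensed into a single scoring function; the step numbers follow the text above, while the basic points and increment sizes (BASE, STEP) are assumed magnitudes:

```python
# Sketch of the Fig. 7 scoring flow; the "predetermined values" are assumed.
BASE, STEP = 50, 1

def score_fig7(p):
    """p: dict with total_claims, independent_claims, dependent_claims,
       avg_dependent_claims, family_patents, has_apparatus, has_method."""
    pts = BASE                                          # S711
    if p["total_claims"] > 20:                          # S712
        pts += STEP + STEP * (p["total_claims"] - 20)   # S713-S714
    if p["independent_claims"] >= 2:                    # S715
        pts += STEP                                     # S723
        if p["independent_claims"] > 3:                 # S724
            pts += STEP * (p["independent_claims"] - 3) # S725
        if p["family_patents"] > 0:                     # S726
            pts += STEP + STEP * (p["family_patents"] - 1)  # S727-S728 (assumed)
        if p["has_apparatus"] and p["has_method"]:      # S729
            pts += STEP                                 # S730
        pts += STEP if p["dependent_claims"] > p["avg_dependent_claims"] else -STEP  # S731-S732
    else:
        if not p["has_apparatus"]:                      # S718
            pts -= STEP                                 # S719
        pts += STEP if p["dependent_claims"] > p["avg_dependent_claims"] else -STEP  # S720-S722
    return pts

print(score_fig7({"total_claims": 22, "independent_claims": 3,
                  "dependent_claims": 19, "avg_dependent_claims": 8,
                  "family_patents": 2, "has_apparatus": True, "has_method": True}))
```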
  • Fig. 8 is a flowchart illustrating an example of an algorithm for evaluating the possibility of invalidation of a patent (or patent stability) based on complex factors.
  • the evaluation servers 110 and 130 first determine whether an invalidation trial (or appeal) has been filed before (S801).
  • the evaluation servers 110 and 130 proceed based on the corrected patent publication reflecting the invalidation result, i.e., excluding the invalidated claims (S806).
  • the evaluation servers 110 and 130 assign initial points to each claim (S807).
  • initial points per claim are assigned based on the remaining claims, excluding the invalidated claims.
  • each independent claim is assigned more basic points than each dependent claim.
  • a weight may be given to each independent claim. For example, with 20 points in total and 11 claims consisting of one independent claim and ten dependent claims, the independent claim is assigned 10 points while each of the ten dependent claims is assigned one point, totaling 20 points.
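  • The 20-point example above can be reproduced with a small allocation helper; the 50% share given to the independent claims is an assumption that happens to match the example:

```python
# Sketch: split a fixed point budget between independent and dependent claims.
def initial_points(n_indep, n_dep, total=20.0, indep_share=0.5):
    indep_pts = total * indep_share / n_indep if n_indep else 0.0
    dep_pts = total * (1 - indep_share) / n_dep if n_dep else 0.0
    return indep_pts, dep_pts

# one independent + ten dependent claims -> 10 points and 1 point each
print(initial_points(1, 10))  # (10.0, 1.0), totaling 20 points
```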
  • the evaluation servers 110 and 130 search whether there is a prior art document that has not been cited during the prosecution (S808).
  • such a search for a not-cited prior art document may be achieved by investigating forward citations and backward citations.
  • for example, when the documents cited by a later patent that cites patent A include prior art document C as well as patent A, prior art document C may serve as prior art for patent A.
  • in case such a not-cited prior art document is present, the evaluation servers 110 and 130 decrease the points for each independent claim by a predetermined value, e.g., 20% (S809).
  • the evaluation servers 110 and 130 determine whether there is a prior art document that has not been cited during prosecution among the prior art documents acquired from the family patent (S810). In case a not-cited prior art document acquired from the family patent is present, the points per independent claim are decreased by a predetermined value as described above (S811).
  • the evaluation servers 110 and 130 determine whether the length of each independent claim is larger than an average (S812). In case the length of each independent claim is larger than the average, the evaluation servers 110 and 130 increase the points per independent claim by a predetermined value (S813).
  • the evaluation servers 110 and 130 determine whether the depth of dependent claims is larger than an average (S814).
  • the “depth of dependent claims” means, when dependent claims depend from other claims stepwise, how many steps the dependency chain has. For example, assuming that claim 1 is an independent claim and claims 2 to 5 are dependent claims, when claim 2 depends from claim 1, claim 3 depends from claim 2, claim 4 depends from claim 3, and claim 5 depends from claim 4, the depth of the dependent claims is four steps.
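  • A short sketch of computing this depth from a claim-dependency map, reproducing the four-step example above:

```python
# Sketch: depth of a claim = number of hops to its independent claim.
def claim_depth(claim, depends):
    """depends[claim] is the claim it depends from (None for independents)."""
    depth = 0
    while depends.get(claim) is not None:
        depth += 1
        claim = depends[claim]
    return depth

depends = {1: None, 2: 1, 3: 2, 4: 3, 5: 4}   # claims 2..5 chained onto claim 1
print(max(claim_depth(c, depends) for c in depends))  # 4 steps
```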
  • the evaluation servers 110 and 130 determine how many family patents are present for the patent targeted for evaluation (S817).
  • the “family patents” mean patents claiming the same priority or patents acquired by filing divisional/double applications. If the number of family patents for the patent targeted for evaluation is not 0, the total points obtained by summing the points of the claims increased or decreased thus far are increased by a predetermined value (S818). However, in case the number of family patents is 0, the total points are decreased by a predetermined value (S819). This is because the existence of several family patents means the invention has undergone examination several times, which would not occur otherwise, yielding more reliable examination results.
  • the individual evaluation factors, when organically organized in an algorithm, may be named complex evaluation factors (simply, complex factors).
  • Fig. 9 is a flowchart illustrating an example of yet still another algorithm for evaluating the possibility of invalidation of a domestic patent (or patent stability) based on complex factors.
  • the evaluation servers 110 and 130 determine whether there is a corrected patent publication issued after the patent was granted (S901).
  • if so, the evaluation servers 110 and 130 go to step S915 and assign initial points to each claim based on the corrected patent publication (S915).
  • the evaluation servers 110 and 130 check the number of domestic/foreign family patents (S902). In case the number of family patents is 0, the evaluation servers 110 and 130 decrease the total assigned points by a predetermined value (S903).
  • the evaluation servers 110 and 130 assign initial points to each claim from the points remaining after the deduction, depending on the number of claims.
  • each independent claim is assigned more points as basic points than each dependent claim.
  • the evaluation servers 110 and 130 determine whether an invalidation trial (or appeal) has been filed before (S905). If an invalidation trial has been filed, it is determined whether there is a written trial decision (S906).
  • the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S910).
  • the evaluation servers 110 and 130 search the presence of any prior art document that has not been cited during the prosecution (S911). In case a not-cited prior art document is present, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S912).
  • the evaluation servers 110 and 130 determine whether there is a not-cited prior art document among the prior art documents acquired from the family patents (S913). In case a not-cited prior art document is present, the points per independent claim are decreased by a predetermined value as described above (S914).
  • the evaluation servers 110 and 130 determine whether the length of each independent claim is larger than an average (S916). In case the length of each independent claim is smaller than the average, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S917), and otherwise, increase the points per independent claim by a predetermined value (S918).
  • the evaluation servers 110 and 130 determine whether the number of the limitations in each independent claim is larger than an average for all of the patents (S919). If the number of limitations is smaller than the average, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S920), and otherwise, increase the points per independent claim by a predetermined value (S921).
  • the evaluation servers 110 and 130 determine whether the depth of dependent claims is larger than an average for all of the patents (S922). In case the depth of dependent claims is smaller than the average for all of the patents, the evaluation servers 110 and 130 decrease the points per dependent claim by a predetermined value (S923), and otherwise, increase the points per dependent claim by a predetermined value (S924).
  • the evaluation servers 110 and 130 determine whether there has been a provision of information or an appeal against the examiner’s final rejection (S925). If any, the total points obtained by summing the points of the claims increased or decreased thus far are increased by a predetermined value (S926).
  • the individual evaluation factors, when organically organized in an algorithm as shown in Fig. 9, may be named complex evaluation factors (or simply complex factors).
  • Fig. 10 is a flowchart illustrating an example of an algorithm for evaluating the possibility of invalidation of a U.S. patent (or patent stability) based on complex factors.
  • the evaluation servers 110 and 130 assign initial points to each claim depending on the number of the claims in a patent targeted for evaluation (S1001). As described above, in assigning initial points to each claim, each independent claim is assigned more points as basic points than each dependent claim.
  • the evaluation servers 110 and 130 search the presence of any prior art document that has not been cited during the prosecution (S1002). If there is a not-cited prior art document, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1003).
  • the evaluation servers 110 and 130 determine whether there is a prior art document that has not been cited among the prior art documents acquired from the family patents (S1004). In case there is no not-cited prior art document, the process goes to step S1010. However, in case there is a not-cited prior art document, it is determined whether the family patent citing the prior art document was issued earlier or later than the patent targeted for evaluation (S1005). If the family patent was issued earlier, the evaluation servers 110 and 130 determine whether the similarity between the patent targeted for evaluation and the prior technology is high (S1006). If the similarity is high, it means that the duty of filing an IDS (Information Disclosure Statement) under 35 U.S.C. (the U.S. patent act) has not been fulfilled, and thus, the maximum point deduction is performed (S1007) and the process is terminated.
  • the evaluation servers 110 and 130 determine whether the similarity between the patent targeted for evaluation and the prior technology is high (S1008). If the similarity is high, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1009).
  • the evaluation servers 110 and 130 determine whether the length of each independent claim is larger than an average (S1010). In case the length of each independent claim is smaller than the average, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1011), and otherwise, increase the points per independent claim by a predetermined value (S1012).
  • the evaluation servers 110 and 130 determine whether the depth of dependent claims is larger than an average for all of the patents (S1013). If the depth of dependent claims is smaller than the average for all of the patents, the evaluation servers 110 and 130 decrease the points per dependent claim by a predetermined value (S1014), and otherwise, increase the points per dependent claim by a predetermined value (S1015).
  • the evaluation servers 110 and 130 determine whether there has been a reexamination or reissue under 35 U.S.C. (S1016). If any, the total points obtained by summing the points of claims increased or decreased thus far are increased by a predetermined value (S1017).
  • the evaluation factors such as the total number of claims, the number of independent claims, the number of dependent claims, the number of claim categories, reexamination, and reissue have unclear standards. Accordingly, rather than being determined individually, the evaluation factors may be determined organically, leading to a more quantitative and objective evaluation. As such, the individual evaluation factors, when organically organized in an algorithm as shown in Fig. 10, may be named complex evaluation factors (simply complex factors).
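  • The IDS branch of the Fig. 10 flow described above may be sketched as follows; the similarity threshold and the deduction sizes are assumptions:

```python
# Sketch of steps S1004-S1009: penalize independent-claim points when an
# uncited prior art document surfaces through a family patent.
def ids_branch(indep_points, uncited_family_doc, family_issued_earlier,
               similarity, sim_threshold=0.8, deduction=0.2):
    if not uncited_family_doc:
        return indep_points                    # nothing found: go on to S1010
    if family_issued_earlier and similarity >= sim_threshold:
        return 0.0                             # S1007: maximum deduction, stop
    if similarity >= sim_threshold:
        return indep_points * (1 - deduction)  # S1009: partial deduction
    return indep_points

print(ids_branch(10.0, True, True, 0.9))   # 0.0: IDS duty deemed unfulfilled
```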
  • Fig. 11 is a flowchart illustrating another algorithm for evaluating the possibility of invalidation of a U.S. patent (or patent stability) based on complex factors.
  • the evaluation servers 110 and 130 assign initial points to each claim depending on the number of the claims in a patent targeted for evaluation (S1101).
  • the evaluation servers 110 and 130 search the presence of any prior art document that has not been cited during the prosecution (S1102). As such, in case there is a not-cited prior art document, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1103).
  • the evaluation servers 110 and 130 determine whether there is a prior art document that has not been cited among the prior art documents acquired from the family patents (S1104). In case there is a not-cited prior art document, it is determined whether the family patent citing the corresponding prior art document was issued earlier or later than the patent targeted for evaluation (S1105). If the family patent was issued earlier, the evaluation servers 110 and 130 determine whether the similarity between the patent targeted for evaluation and the prior technology is high (S1106). If the similarity is high, it means that the duty to file an IDS (Information Disclosure Statement) under 35 U.S.C. has not been fulfilled, and thus, a maximum point deduction process is conducted (S1107), followed by termination of the process.
  • otherwise, the evaluation servers 110 and 130 determine whether the similarity between the patent targeted for evaluation and the prior technology is high (S1109). If the similarity is high, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1110).
  • the evaluation servers 110 and 130 determine whether the length of each independent claim is larger than an average (S1111). In case the length of each independent claim is smaller than the average, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1112), and otherwise, increase the points per independent claim by a predetermined value (S1113).
  • the evaluation servers 110 and 130 determine whether the depth of dependent claims is larger than an average for all of the patents (S1114). In case the depth of dependent claims is smaller than the average for all of the patents, the evaluation servers 110 and 130 decrease the points per dependent claim by a predetermined value (S1115), and otherwise, increase the points per dependent claim by a predetermined value (S1116).
  • the evaluation servers 110 and 130 determine the number of family patents of the patent targeted for evaluation (S1117). In case the number of the family patents of the patent targeted for evaluation is 0, the total points obtained by summing the points of the claims increased or decreased thus far are decreased by a predetermined value (S1118).
  • the evaluation servers 110 and 130 determine whether there has been a reissue under 35 U.S.C. (S1119). If any, the total points obtained by summing the points increased or decreased thus far are increased by a predetermined value (S1120).
  • the evaluation servers 110 and 130 determine whether there has been a reexamination under 35 U.S.C. (S1121). If any, the points are increased to the maximum points (S1123).
  • the evaluation factors such as the total number of claims, the number of independent claims, the number of dependent claims, the number of claim categories, reexamination, and reissue have unclear standards. Accordingly, rather than determined individually, the evaluation factors may be organically determined, thus allowing for more quantitative and objective evaluation.
  • the individual evaluation factors, when organically organized in an algorithm as shown in Fig. 11, may be named complex evaluation factors (or simply complex factors).
  • Fig. 12 is a flowchart illustrating a method of establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by performing a machine learning about an expert’s evaluation result according to an embodiment of the present invention.
  • evaluation items may be previously defined (S1210).
  • the evaluation items may be, as described earlier, defined as strength of patent right, quality of technology, and usability. Or, the evaluation items may also be defined as strength of patent right and marketability (or, commercial potential). Such definitions may be changed depending on what goals are to be achieved by evaluating patents.
  • the service servers 120 and 140 primarily map evaluation items with evaluation factors for sample patents and provide the result of the mapping to an expert’s computer (S1220).
  • the primary mapping may be to map the candidates of evaluation factors inferred to be associated with each evaluation item.
  • the result of evaluating the sample patents may be received from the expert’s computer (S1230).
  • the evaluation result may be points given by the expert to the evaluation items.
  • the service servers 120 and 140 may prepare a webpage to provide information to the expert’s computer and to receive a result of evaluation.
  • the correlation between the evaluation factors and one or more prepared evaluation items may be yielded based on the expert’s evaluation results for the sample patents (S1240).
  • the correlation may have a value from -1 to +1 as described above.
  • remapping may be done between each evaluation item and the evaluation factors based on the yielded correlations (S1250). Through such remapping, some of the evaluation factors primarily mapped to each evaluation item may be excluded, and other evaluation factors may be newly mapped to arbitrary evaluation items.
  • the evaluation factors mapped to the evaluation items may be used to perform a machine learning about the expert’s evaluation result, thereby establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) (S1260).
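  • Putting the pieces together, the Fig. 12 flow may be sketched end to end; select_factors, fit_weights, and predict are the hypothetical helpers from the earlier sketches, and the hold-out scheme assumes one of the 10 sample sets is reserved for verification:

```python
# Sketch: correlate (S1240), remap (S1250), learn (S1260), then verify.
import numpy as np

def build_engine(X, y, factor_names, n_splits=10):
    values = {name: X[:, i] for i, name in enumerate(factor_names)}
    kept = sorted(select_factors(values, list(y)))         # remapped factors
    Xk = np.column_stack([values[name] for name in kept])
    cut = len(y) - len(y) // n_splits                      # hold out one set
    w = fit_weights(Xk[:cut], np.asarray(y[:cut]))         # machine learning
    err = np.mean(np.abs(predict(Xk[cut:], w) - np.asarray(y[cut:])))
    return kept, w, err                                    # factors, weights, error
```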
  • Fig. 13 is a flowchart illustrating a method of providing a patent evaluation service using an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) according to an embodiment of the present invention.
  • the service servers 120 and 140 may receive information on a specific patent from a user device (S1310) and may receive a request for evaluating the specific patent from the user device (S1320). For this purpose, the service servers 120 and 140 may provide a webpage to the user’s computer.
  • the service servers 120 and 140 may provide a result of the evaluation that has been yielded on a specific patent identified using the information, using a previously established evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent), so that the result may be output through the user’s computer (S1330).
  • the service servers 120 and 140 may simply provide the result of evaluation only. However, the service servers 120 and 140 may also generate an evaluation report and may provide the generated evaluation report to the user’s computer.
  • the evaluation report may include the yielded evaluation result and additional description on the evaluation result. Such evaluation report may be made in the PDF format or may be based on a webpage.
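  • The Fig. 13 service interface may be sketched as a small HTTP endpoint; Flask, the route, the payload fields, and the stub engine are all assumptions for illustration:

```python
# Sketch: receive an evaluation request (S1310/S1320) and return the result (S1330).
from flask import Flask, request, jsonify

app = Flask(__name__)

class StubEngine:                        # stands in for the evaluation engine
    def evaluate(self, patent_id):
        return {"score": 72, "grade": "B"}   # placeholder demo values

engine = StubEngine()

@app.route("/evaluate", methods=["POST"])
def evaluate():
    patent_id = request.json["patent_id"]
    return jsonify({"patent_id": patent_id, "evaluation": engine.evaluate(patent_id)})

if __name__ == "__main__":
    app.run()
```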
  • the embodiments described herein may be implemented by, for example, ASICs (application specific integrated circuits), DSPs (digital signal processors), DSPDs (digital signal processing devices), PLDs (programmable logic devices), FPGAs (field programmable gate arrays), processors, controllers, microcontrollers, or microprocessors.
  • the software codes may be stored in a memory unit and may be driven by a processor.
  • the memory units may be positioned inside or outside the processor and may send and receive data to/from the processor via various known means.
  • Fig. 14 illustrates the physical configuration of evaluation servers 110 and 130 and service servers 120 and 140 according to an embodiment of the present invention.
  • the evaluation servers 110 and 130 may include transmitting/receiving units 110a and 130a, controllers 110b and 130b, and storage units 110c and 130c, and the service servers 120 and 140 may include transmitting/receiving units 120a and 140a, controllers 120b and 140b, and storage units 120c and 140c.
  • the storage units store programs implementing the methods illustrated in Figs. 4 to 13 and what has been described above.
  • the storage units 110c and 130c of the evaluation servers 110 and 130 may store a program in which the above-described specification processing units 111 and 131, natural language processing units 112 and 132, keyword extracting units 113 and 133, similar patent extracting units 114 and 134, evaluation factor (or evaluation index) processing units 115 and 135, and patent evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 are implemented.
  • the storage units 120c and 140c of the service servers 120 and 140 may store one or more of the evaluation report generating units 121 and 141 and portfolio analysis units 122 and 142.
  • the controllers control the transmitting/receiving units and the storage units. Specifically, the controllers execute the programs or the methods stored in the storage units. The controllers transmit and receive signals through the transmitting/receiving units.

Abstract

Provided is a method of evaluating a patent based on complex factors. The method may be performed by a computer and comprise: receiving an evaluation request for a specific patent from a user device; and providing an evaluation result, which is yielded for the specific patent using an evaluation engine, to the user device. The evaluation engine may include an algorithm in which two or more of the total number of claims, the number of independent claims, and the number of dependent claims are organically systematized into the complex factor. The evaluation engine may yield the evaluation result using the complex factors of the algorithm.

Description

METHOD FOR EVALUATING PATENTS BASED ON COMPLEX FACTORS
This disclosure relates to a method for evaluating patents.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of priority of Korean Patent Application Nos. 10-2012-0144315 and 10-2012-0144325, both filed on December 12, 2012, both of which are incorporated by reference in their entirety herein.
BACKGROUND ART
Recent intellectual property (IP) strategies of some companies for protecting their technologies have achieved as much as those in other developed countries. However, IP owners possessing a large number of IPs suffer the costs and effort of maintaining their registered IPs.
Further, it is not easy to distinguish, among registered IP rights, those unnecessary to retain from those that deserve intensive investment.
Accordingly, IP owners evaluate their IP rights on their own or entrust profit/non-profit organizations to conduct IP evaluation.
Meanwhile, results of evaluation of patents may be utilized for various purposes, such as maintenance of patents, offer of strategies for utilizing patents, support for research planning, estimation of patents in light of right, economy, and environment, invention evaluation, grasp of a critical invention and priority, association with business strategies (strategic alliance), allocation of R&D planning resources, technology evaluation for a loan from a financial organization, evaluation for choosing a provider (subject) of a government direct/indirect technical development support business, evaluation for intangible assets, evaluating customers’ intangible assets into current values based on clear and objective materials in consideration of technical, economical, and social aspects, compensation for inventors, asset evaluation (for depreciation), evaluation for IPs for the purpose of technology trade (technology transfer, M&A, etc.), evaluation of IP rights for loaning a technology, or attraction of investment.
However, the data analysis that is part of the process of reporting an evaluated patent to an IP owner is highly time- and cost-consuming and requires many skilled workers. Further, a majority of the work is done manually, leading to the need for a more objective technology evaluating system and method.
Fig. 1 is a view illustrating the necessity of introducing a patent evaluation system.
As can be seen from Fig. 1(a), a great number of patents, e.g., a few hundred or a few thousand patents, when left to a specialist for evaluation, requires significant time and cost.
However, as shown in Fig. 1(b), in case the patents first go through filtering, only a small number of patents (e.g., a few tens) need to be sent to a specialist for evaluation, which may save a great deal of time and expense.
Meanwhile, some standards, which are used when patents are automatically evaluated, are unclear. For example, for factors such as the number of independent claims or the number of dependent claims, it is unclear how many claims should be regarded as good or bad.
Accordingly, an embodiment of this disclosure aims to suggest a system for evaluating a patent. Further, an embodiment of this disclosure aims to suggest a complex factors algorithm for some vague standards used when a patent is automatically evaluated by a system. Further, another embodiment of this disclosure aims to suggest a complex factors algorithm for vague standards used when evaluating possibility of invalidation (or patent stability) of a patent in automatically evaluating a patent by a system.
To address the above-described issues, there is provided a method of evaluating a patent based on complex factors. The method may comprise: receiving an evaluation request for a specific patent from a user device; and providing an evaluation result, which is yielded for the specific patent using an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent), to the user device. The evaluation engine may include an algorithm in which two or more of the total number of claims, the number of independent claims, and the number of dependent claims are organically systematized into the complex factor. The evaluation engine may yield the evaluation result using the complex factors of the algorithm.
The yielding of the evaluation result using the complex factors of the algorithm may include: determining whether the number of independent claims of the specific patent is equal to or more than a predetermined number; determining whether there exists a family patent for the specific patent, when the number of independent claims is less than the predetermined number; and determining whether there exist both an apparatus claim (or product claim) and a method claim in the specific patent, when the number of independent claims is equal to or more than the predetermined number.
The yielding of the evaluation result using the complex factors of the algorithm may include: determining whether the total number of claims of the specific patent is higher than a predetermined number.
The yielding of the evaluation result using the complex factors of the algorithm may include: determining whether the number of dependent claims is higher than a predetermined number.
The evaluation engine may be built by performing machine learning on an expert’s (or patent technician’s) evaluation results for sample patents in view of each evaluation item.
The evaluation engine is built by one or more of: calculating correlations between evaluation factors and one or more predetermined evaluation items based on the expert’s (or patent technician’s) evaluation results for the sample patents; mapping the evaluation items with the evaluation factors based on the calculated correlations; and performing the machine learning about the expert’s evaluation results using the evaluation factors mapped with the evaluation items.
The evaluation factors may include: information extracted from one or more of bibliographical information, prosecution history information, a specification, and patented claims; or information extracted by performing a natural language process on the specification and the patented claims.
The expert’s evaluation may be performed for each technical field, so the evaluation engine may be built for each technical field. Accordingly, the providing of the evaluation result may be performed using the evaluation engine of the technical field corresponding to that of the specific patent.
The building of the evaluation engine may include: calculating correlation values between the results evaluated by a plurality of experts in each technical field; and building the evaluation engine based on the expert’s evaluation result having the highest correlation value among the calculated correlation values.
To address the above-described issues, there is provided an evaluation server. The evaluation server may comprise: an interface unit configured to receive an evaluation request for a specific patent from a user device; and an evaluation engine unit configured to generate an evaluation result for the specific patent. The interface unit provides the generated evaluation result to the user device. The evaluation engine unit may include an algorithm in which two or more of the total number of claims, the number of independent claims, and the number of dependent claims are organically systematized into the complex factor. The evaluation engine yields the evaluation result using the complex factors of the algorithm.
To address the above-described issues, there is provided a computer-implemented method of evaluating a possibility of invalidation (or patent stability) of a requested patent case based on complex factors. The method may include: receiving an evaluation request for a specific patent case from a user device; and providing an evaluation result, which is yielded for the specific patent case using an evaluation engine, to the user device. The evaluation engine may include an algorithm in which two or more of a factor about the length of an independent claim of the specific patent case, a factor about whether there is a prior technical document for the specific patent case, and a factor about whether there is a family patent are organically systematized into the complex factor. The evaluation engine may determine how much possibility of invalidation the specific patent case has according to the complex factors.
The yielding of the evaluation result using the complex factors of the algorithm may include: assigning reference points to each claim of the specific patent case according to the number of independent claims and the number of dependent claims; and increasing or decreasing the reference points of each claim by a predetermined value based on one or more of the length of an independent claim, whether there is a prior technical document for the specific patent case, and whether there is a family patent.
The increasing or decreasing of the reference points of each claim may include: determining whether there is the prior technical document; and increasing or decreasing the reference points of each claim based on whether the prior technical document was cited during the prosecution of the specific patent case.
The factor about whether there is the prior technical document may be acquired from a forward citation or a backward citation, or from a family patent.
According to an embodiment of this disclosure, a patent may be automatically evaluated by a system, and a result of the evaluation may be suggested. Further, according to an embodiment of this disclosure, an algorithm for some vague standards may be suggested, allowing for more quantitative and objective automatic evaluation of a patent.
Fig. 1 is a view illustrating the necessity of introducing a patent evaluation system;
Fig. 2 is a view illustrating the entire architecture of a patent evaluation system according to an embodiment of the present invention;
Fig. 3 is a view illustrating in detail one or more servers 100 as shown in Fig. 2;
Fig. 4 is a view illustrating in detail an example of the configuration of domestic/foreign patent evaluation servers 110 and 130 as shown in Fig. 3;
Fig. 5 is a flowchart illustrating an example of an algorithm for processing some evaluation items;
Fig. 6 is a flowchart illustrating an example of another algorithm for processing some evaluation items;
Fig. 7 is a flowchart illustrating an example of still another algorithm for processing some evaluation items;
Fig. 8 is a flowchart illustrating an example of an algorithm for evaluating the possibility of invalidation of a patent (or patent stability) based on complex factors;
Fig. 9 is a flowchart illustrating an example of yet still another algorithm for evaluating the possibility of invalidation of a domestic patent (or patent stability) based on complex factors;
Fig. 10 is a flowchart illustrating an example of an algorithm for evaluating the possibility of invalidation of a U.S. patent (or patent stability) based on complex factors;
Fig. 11 is a flowchart illustrating another algorithm for evaluating the possibility of invalidation of a U.S. patent (or patent stability) based on complex factors;
Fig. 12 is a flowchart illustrating a method of establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by performing a machine learning about an expert’s evaluation result according to an embodiment of the present invention;
Fig. 13 is a flowchart illustrating a method of providing a patent evaluation service using an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) according to an embodiment of the present invention; and
Fig. 14 illustrates the physical configuration of evaluation servers 110 and 130 and service servers 120 and 140 according to an embodiment of the present invention.
As used herein, the technical terms are used merely to describe predetermined embodiments and should not be construed as limiting. Further, the technical terms, unless defined otherwise, should be interpreted as generally understood by those of ordinary skill in the art and should not be construed to be unduly broad or narrow. Further, technical terms that do not correctly express the spirit of the present invention should be replaced with technical terms that can be correctly understood by those of ordinary skill in the art. Further, the general terms as used herein should be interpreted as defined in the dictionary or in context and should not be interpreted as unduly narrow.
As used herein, the singular form, unless stated otherwise, also includes the plural form. As used herein, the terms “including” or “comprising” should not be interpreted as necessarily including all of the several components or steps as set forth herein and should rather be interpreted as being able to further include additional components or steps.
Further, as used herein, the terms “first” and “second” may be used to describe various components, but these components are not limited thereto. The terms are used only for distinguishing one component from another. For example, without departing from the scope of the present invention, a first component may also be referred to as a second component, and the second component may likewise be referred to as the first component.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The same reference numerals may refer to the same or similar elements throughout the specification and the drawings.
When a detailed description of related known art is determined to make the gist of the present invention unclear, that detailed description is omitted. Further, the accompanying drawings are provided merely to give a better understanding of the spirit of the present invention, and the present invention should not be limited thereto.
Fig. 2 is a view illustrating the entire architecture of a patent evaluation system according to an embodiment of the present invention.
As can be seen from Fig. 2, a patent evaluation system according to an embodiment of the present invention includes one or more servers 100 and one or more databases (hereinafter, simply referred to as “DB”) 190. The one or more servers 100 may be remotely managed by a managing device 500.
The one or more servers 100 are connected to a wired/wireless network and may provide a user device 600 with an evaluation result service and other various services. Specifically, when receiving a request for an evaluation service for a specific patent case from the user device, the one or more servers 100 may provide a result from evaluating the specific patent case.
Fig. 3 is a view illustrating in detail one or more servers 100 as shown in Fig. 2.
As shown in Fig. 3, one or more servers 100 may include an evaluation server 110 for domestic patents (e.g., Korean patents), a service server 120 for domestic patents (e.g., Korean patents), an evaluation server 130 for foreign patents (e.g., U.S. patents), and a service server 140 for foreign patents (e.g., U.S. patents). Although in Fig. 3 the domestic patent evaluation server 110 and the foreign (e.g., U.S.) patent evaluation server 130 are, by way of example, physically separated from each other, these servers may be integrated into a single physical server. Likewise, the domestic patent service server 120 and the foreign (e.g., U.S.) patent service server 140 are shown to be physically separated from each other, but these servers may be integrated into a single physical server. Further, the servers 110, 120, 130, and 140 as illustrated may all be integrated into a single physical server.
Further, as shown in Fig. 3, the above-described one or more databases 190 may include patent information DBs 191 and 192, evaluation factor (or evaluation index) DBs 193 and 194, similar patent DBs 195 and 196, and evaluation result DBs 197 and 198. Each DB is illustrated to be provided separately from each other for the purpose of each of evaluation of domestic patents and evaluation of foreign (e.g., U.S.) patents, and the DBs may be integrated into one. For example, the domestic (e.g., Korean) patent information DB 191 and the foreign (e.g., U.S.) patent information DB 192 may be integrated into one, and the domestic (e.g., Korean) evaluation factor (or evaluation index) DB 193 and the foreign (e.g., U.S.) evaluation factor (or evaluation index) DB 194 may be integrated into one. Alternatively, the DBs all may be integrated into one that may be divided into fields.
Such DBs may be generated based on what is received from an external DB provider. For such reception, the server 100 may include a data collecting unit 150 that receives a domestic (e.g., Korean) or foreign (e.g., U.S.) raw DB from the external DB provider. The data collecting unit 150 physically includes a network interface (NIC). Further, the data collecting unit 150 logically may be a program constituted of an API (Application Programming Interface). The data collecting unit 150 processes a raw DB received from the external DB provider and may store the received raw DB in one or more DBs 190, for example, patent information DBs 191 and 192 which are connected to the server 100.
Meanwhile, the domestic/foreign patent evaluation servers 110 and 130 may include one or more of specification processing units 111 and 131, natural language processing units 112 and 132, keyword processing units 113 and 133, similar patent processing units 114 and 134, evaluation factor (or evaluation index) processing units 115 and 135, and evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136.
The specification processing units 111 and 131 extract information from the one or more DBs 190, for example, the patent information DBs 191 and 192, and parse (or transform) the information. For example, the specification processing units 111 and 131 may extract one or more of a patent specification, bibliographic information, prosecution history information, claims, and drawings and may store the extracted information in each field of the evaluation factor (or evaluation index) DB.
The natural language processing units 112 and 132 perform a natural language process on text included in the extracted patent specification and the claims. As used herein, the “natural language process” refers to a computer analyzing a natural language used for, e.g., general conversation, rather than a special programming language for computers. For example, the natural language processing units 112 and 132 may conduct sentence analysis, syntax analysis, and a process of a mixed language. Further, the natural language processing units 112 and 132 may carry out a semantic process.
The keyword processing units 113 and 133 extract keywords from each patent based on a result of the natural language process. In order to extract keywords, a scheme such as VSM (Vector Space Model) or LSA (Latent Semantic Analysis) may be used. As used herein, the “keyword” of a patent specification refers to word(s) that represent the subject of the patent specification; for example, in the instant specification, “patent evaluation” may be a keyword. As such, it may be advantageous to extract as many keywords representing the subject of each patent specification as possible, but merely increasing the number of extracted keywords may rather lead to inaccuracy. Accordingly, selecting a proper number of keywords is important.
The similar patent processing units 114 and 134 may search patents closest to each patent based on the extracted keywords and may store the results of search in the similar patent DBs 195 and 196. In general, similar patent groups are known to belong to the same sub class in the IPC (International Patent Classification), but according to an embodiment of this disclosure, similar patents may be searched from other sub classes as well as the same sub class. As such, in order to increase accuracy when searching similar patents from other sub classes, it is most critical to precisely extract keywords. In particular, as described above, a mere increase in the number of keywords may lead to extraction of inaccurate keywords, thus resulting in completely different patents being searched from other sub classes as similar patents. Accordingly, according to an embodiment of this disclosure, a proper number of keywords are extracted based on a result of simulation depending on the number of keywords and similar patents are searched with the extracted keywords.
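For a rough illustration of such cross-subclass search, similarity between a target patent and candidates may be computed over keyword vectors. The sketch below assumes a simple count-based vector space model, with corpus as a hypothetical {patent_id: keywords} map; the disclosure leaves the exact VSM/LSA scheme open.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two keyword-count vectors.
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(target_keywords, corpus, top_n=10):
    """corpus: {patent_id: [keyword, ...]}; returns the closest patents,
    from any IPC sub class, by keyword-vector similarity."""
    tv = Counter(target_keywords)
    scored = [(pid, cosine(tv, Counter(kws))) for pid, kws in corpus.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_n]
```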
Meanwhile, the evaluation factor (or evaluation index) processing units 115 and 135 extract values of evaluation factors (or evaluation indexes) from one or more of a patent specification, bibliographical information, prosecution history information, claims, and drawings and store the extracted values in the evaluation factor DBs 193 and 194.
Among the evaluation factors, ones for evaluating Korean patents may differ from others for evaluating foreign patents. For example, evaluation factors for evaluating Korean patents are first listed as below:
Table 1
Evaluation Factor | Description
Length of each independent claim | Number of words in an independent claim
Number of claims | Number of claims
Number of claim categories | Number of categories of independent claims (product or method)
Number of independent claims | Number of independent claims
Number of domestic family patents | Number of domestic family patents (divisional applications, family of patent applications claiming the same priority)
Number of foreign family patents | Number of family patents in foreign countries
Number of annual fees | Number of years after the issue
Whether there exists a request for accelerated examination | Whether a request for accelerated examination has been made
Elapsed days before request for examination | Days from the filing date to the date of filing the request for examination
Number of responses filed to Office Action(s) | Number of times responses have been filed
Number of appeals filed to Final Office Action | Number of times appeals to final office actions have been filed
Number of backward citations | Total number of times backward citation has been done
Number of joint applicants | Number of joint applicants
Number of licensees | Number of licensees
Number of trials to confirm the scope of a patent | Number of times such a trial has been filed
Number of requests for accelerating appeal | Number of times such an appeal has been filed
Whether the patent is published early by a request | Whether the patent has been published early by a request
Number of provisions of information by third party | Number of times provision of information has been made by a third party
Number of oppositions | Number of times opposition has been filed
Whether the request for examination is filed by a third party | Which of the applicant or a third party has filed the request for examination
Number of invalidation trials | Number of times such a trial has been filed
Number of defensive confirmation trials for the scope of a right | Number of times such a trial has been filed
Number of embodiments | Number of embodiments
Number of drawings | Number of drawings
Number of words in detailed description | Number of words included in the detailed description section of a specification
Number of IPCs | Number of IPC classification codes
Number of ownership changes | Number of times ownership has been changed
Lawsuit information | Number of times a lawsuit, if any, has been filed
Number of prior art documents | Number of prior technical documents cited during the examination
On the other hand, evaluation factors for evaluating a foreign patent, e.g., a U.S. patent, may be listed in the following table:
Table 2
Evaluation Factor | Description
Length of independent claim | Number of words in an independent claim
Number of claim categories | Number of categories (product or method) of independent claims
Number of independent claims | Number of independent claims
Number of words in detailed description | Number of words in the detailed description of a specification
Total number of claims | Number of claims
Number of in-U.S. family patents | Number of family patents in the U.S.
Number of reexaminations | Number of times reexamination has been filed
Number of interferences | Number of times interference has been filed
Number of reissues | Number of times reissue has been filed
Number of backward citations | Total number of times backward citation has been done
Number of IPCs | Number of IPC classification codes
Number of foreign family patents | Number of family patents in foreign countries
Number of annual fees | Number of times annual fees have been paid
Whether there exists a request for accelerated examination | Whether a request for accelerated examination has been made
Number of Certificates of Correction | Number of times a Certificate of Correction has been filed
Number of ownership changes | Number of times ownership has been changed
Lawsuit information | Number of times a lawsuit, if any, has been filed
Number of prior art documents | Number of prior technical documents cited during the examination
Meanwhile, the above-suggested evaluation factors (evaluation indexes) are merely examples, and any information that may be directly induced from patent information may be utilized as evaluation factors (evaluation indexes). Further, any information that may be indirectly obtained or induced by processing patent information may be used as evaluation factors (evaluation indexes).
Meanwhile, the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 evaluate each patent for each of predetermined evaluation items based on the evaluation factors and evaluation mechanism stored in the evaluation factor DBs 193 and 194 and produce results of the evaluation. Further, the evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 may store the produced evaluation results in the evaluation result DBs 197 and 198.
The evaluation items may be defined as strength of patent right, quality of technology, and usability. Or, the evaluation items may be defined as strength of patent right and marketability (or commercial potential). Such definitions may be changed depending on the main object of patent evaluation. Accordingly, the scope of the present invention is not limited to the items listed above and may be expanded to anything to which it may apply.
The evaluation mechanism may include a weight and a machine learning model. The weight may be a value obtained from an expert’s (or patent technician’s) evaluation results with respect to several sample patents. The evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 will be described below in detail with reference to Fig. 4.
The domestic/foreign patent service servers 120 and 140 may include one or more of evaluation report generating units 121 and 141 and portfolio analyzing units 122 and 142. The evaluation report generating units 121 and 141 generate evaluation reports based on evaluation results stored in the evaluation result DBs 197 and 198. The portfolio analyzing units 122 and 142 may analyze portfolios of patents owned by a patent owner based on the information stored in the similar patent DBs 195 and 196. Further, the portfolio analyzing units 122 and 142 may analyze patent statuses for each right owner by technology (or technical field) or by IPC classification. Besides, the portfolio analyzing units 122 and 142 may perform various types of analysis based on the similar patent DBs 195 and 196 and the evaluation result DBs 197 and 198. For example, the portfolio analyzing units 122 and 142 may perform various types of analysis such as patent trend analysis, or per-patentee total annual fees analysis.
As such, the domestic/foreign patent service servers 120 and 140, upon receiving a request for an evaluation service for a predetermined patent from a user device, may provide the results of evaluation of the specific patent case. Further, in response to a user’s request, the evaluation reports may be provided in the form of a webpage, an MS Excel file, a PDF file, or an MS Word file, or the results of analysis may be offered. To provide such a service, a user authentication/authority managing unit 160 may be needed.
Fig. 4 is a view illustrating in detail an example of the configuration of domestic/foreign patent evaluation servers 110 and 130 as shown in Fig. 3.
As can be seen from Fig. 4 and what has been described above, the specification processing units 111 and 131 receive patent specification from the patent information DBs 191 and 192 and parse the patent specification. The patent specification may be written in, e.g., XML, and the specification processing units 111 and 131 may include XML tag processing units to parse the XML.
The evaluation factor processing units 115 and 135 may include a first evaluation factor processing unit and a second evaluation factor processing unit for processing evaluation factors based on the parsed patent specification. The first evaluation factor processing unit extracts values of evaluation factors that do not require the result of natural language processing based on the parsed patent specification. For example, the first evaluation factor processing unit calculates the values of evaluation factors that do not require natural language processing, such as a length of each independent claim, the number of claims, the number of claim categories, the number of independent claims, the number of domestic family patents, the number of foreign family patents as shown in Table 1 and stores the values in the evaluation factor DBs 193 and 194.
The natural language processing units 112 and 132 perform natural language processing based on the parsed patent specification. The natural language processing units 112 and 132 include a morpheme analyzing unit and a TM analyzing unit that work based on a dictionary DB. The “morpheme” refers to the smallest meaningful unit that cannot be analyzed any further, and the “morpheme analysis” refers to the first step of natural language analysis, which changes an input string of letters into a string of morphemes. The TM analysis is a two-level analysis task and is represented as Tm = (R, F, D), where R is a set of rules, F is a finite automatic converter, and D is a trie dictionary.
If the natural language processing is done, the second evaluation factor processing unit of the evaluation factor processing units 115 and 135 calculates the values of the remaining evaluation factors based on the result of the natural language processing. For example, the value of an evaluation factor such as “keyword consistency with similar foreign patent group” summarized above is calculated and stored in the evaluation factor DBs 193 and 194.
Meanwhile, the keyword extracting units 113 and 133, which extract keywords based on the result of the natural language processing, may include a keyword candidate selecting unit, a useless word removing unit, and a keyword selecting unit. The keyword candidate selecting unit selects keyword candidates that may represent the subject of each patent. The useless word removing unit removes useless words of low importance from among the extracted keyword candidates. The keyword selecting unit finally selects a proper number of keywords from among the candidates remaining after the useless words have been removed and stores the selected keywords in the evaluation factor DBs 193 and 194.
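By way of illustration only, the three stages may be sketched as follows; the stopword list, the TF-IDF weighting, and the cut-off k are assumptions, since the disclosure fixes the stages but not the scoring.

```python
import math
from collections import Counter

def select_keywords(doc_tokens, corpus_token_sets, stopwords, k=10):
    """doc_tokens: tokens of one patent after natural language processing.
    corpus_token_sets: one set of tokens per patent in the corpus."""
    # Stages 1-2: candidate selection, then useless-word removal.
    tf = Counter(t for t in doc_tokens if t not in stopwords)
    n_docs = len(corpus_token_sets)
    def tfidf(term):
        # Weight frequent-in-document but rare-in-corpus terms highest.
        df = sum(1 for s in corpus_token_sets if term in s)
        return tf[term] * math.log((1 + n_docs) / (1 + df))
    # Stage 3: keep only a proper number k of the highest-weighted keywords.
    return sorted(tf, key=tfidf, reverse=True)[:k]
```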
The following table shows the accuracy per number of keywords.
Table 3
Keyword count | 10 | 50
Rate of recall | 22.7% | 54.1%
Accuracy | 20.6% | 10.9%
Referring to Table 3, the rate of recall (that is, the chance of a keyword being reused) is 22.7% with 10 keywords and rises to 54.1% with 50 keywords. However, with 50 keywords the accuracy is 10.9%, whereas with 10 keywords it is 20.6%. As set forth above, merely increasing the number of keywords raises the rate of recall but lowers the accuracy; accordingly, an optimum number of keywords may be determined based on the measured accuracy.
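Given such simulation statistics, choosing the count is mechanical. A toy sketch using the Table 3 figures (the tie-breaking rule is an assumption):

```python
def optimal_keyword_count(stats):
    """stats: {keyword_count: (recall, accuracy)} from simulation.
    Picks the count with the best accuracy, breaking ties by recall."""
    return max(stats, key=lambda k: (stats[k][1], stats[k][0]))

# With the Table 3 values, 10 keywords win on accuracy:
# optimal_keyword_count({10: (0.227, 0.206), 50: (0.541, 0.109)})  -> 10
```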
The similar patent extracting units 114 and 134 search similar patents based on the keywords and may include a document clustering unit, a document similarity calculating unit, and a similar patent generating unit. The document clustering unit primarily clusters similar patents based on the keywords. The document similarity calculating unit calculates similarity between patent documents among the clustered patents. The similar patent generating unit generates, as a result, actually closest patents among the primarily clustered patent documents and stores the result in the similar patent DBs 195 and 196.
The patent evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 include a machine learning model unit and a patent evaluation unit. The machine learning model unit performs a machine learning based on the expert evaluation result DB. For this, an evaluation result for sample patents may be received from each expert per technology (i.e., technical field).
The sample patents are a set for the machine learning, and a few hundred to a few thousand patents may be extracted from the patent information DBs 191 and 192 as the sample patents. The sample patents may be selected to evenly cover the evaluation factors shown in Tables 1 and 2. For example, among all the issued patents (a few tens to a few millions of patents), only very few have a non-zero value for some evaluation factors, such as the number of invalidation trials, the number of trials to confirm the scope of a patent, the number of defensive confirmation trials for the scope of a right, or the number of requests for accelerating appeal. Accordingly, it is preferable to select the sample patents so that patents having a non-zero value for each evaluation factor are distributed at a predetermined ratio. Further, when picking the sample patents, the patents may be divided into a plurality of sets (for example, 10 sets). Among the plurality of sets, some may be used for the machine learning, and the remainder may be used to verify the result of the machine learning.
In order to receive the result of the expert’s evaluation, the service servers 120 and 140 provide the values of the evaluation factors for each sample patent and the above-described evaluation items to the expert. For example, the service servers 120 and 140 provide the expert’s computer with a webpage listing the afore-described evaluation items (for example, strength of patent right, quality of technology, and usability), together with a webpage listing the evaluation factors as summarized in Table 1 or 2. At this time, the service servers 120 and 140 may map candidates of evaluation factors associated with each evaluation item and show the result of the mapping. Then, the expert enters a point (or score) for each evaluation item in the webpage while viewing the associated evaluation factor candidates, and the service servers 120 and 140 may receive the point (or score) and store it in the expert evaluation DB.
Then, the machine learning model unit extracts the evaluation factors actually associated with each evaluation item based on the expert’s evaluation results stored in the expert evaluation DB. Specifically, the machine learning model unit analyzes the correlation between each evaluation item and each evaluation factor: for example, whether the point (or score) input by the expert for an evaluation item increases as the value of an evaluation factor increases, or increases as the value of the evaluation factor decreases. A negative correlation value represents that the value of the evaluation item increases when the value of the evaluation factor decreases, and a positive correlation value represents that the value of the evaluation item increases when the value of the evaluation factor increases.
The machine learning model unit extracts evaluation factors having a high correlation with a corresponding evaluation item, such as “strength of patent right”, among the evaluation items. The correlation has a value between -1 and +1; generally, a value in the range from 0.2 to 0.6 is considered high. Accordingly, the machine learning model unit may select “the number of claims”, “the number of independent claims”, and “the number of claim categories” as evaluation factors for the “strength of patent right” evaluation item.
Meanwhile, each expert per technology (or technical field), upon evaluation of patents, may exhibit different evaluation results, and to address such issue, a correlation between the experts may be additionally calculated according to an embodiment of the present invention.
Table 4
Field | Experts | Strength of Patent Right | Quality of Technology | Usability
Electronics | A, B | 0.64 | 0.39 | 0.83
Electronics | C, D | 0.29 | 0.32 | 0.48
Mechanics | E, F | 0.60 | 0.23 | 0.55
Mechanics | G, H | 0.59 | 0.23 | 0.63
Chemistry | I, J | 0.66 | 0.71 | 0.60
Chemistry | K, L | 0.59 | 0.34 | 0.50
Physics | M, N | 0.50 | 0.35 | 0.48
Physics | O, P | 0.81 | 0.15 | 0.80
Biology | Q, R | 0.64 | 0.66 | 0.19
Biology | S, T | 0.51 | 0.45 | 0.38
As summarized in Table 4 above, technical fields may be categorized, e.g., into electronics, mechanics, chemistry, physics, and biology. After a plurality of experts are assigned to each field, the experts per field may be grouped in pairs. For example, experts A, B, C, and D are assigned to the electronics field; experts A and B as a pair evaluate the same issued patents, and, in the same way, experts C and D as a pair evaluate the same issued patents. In such a case, the correlation between experts A and B calculated for the electronics field is, as shown in Table 4, 0.64 for the Strength of Patent Right evaluation item, 0.39 for the Quality of Technology evaluation item, and 0.83 for the Usability evaluation item. In case the correlation between paired experts is low, the evaluation result of a pair of experts having a higher correlation may be used, or alternatively, a higher weight may be assigned to one of the paired experts.
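For illustration, the pairwise agreement figures of Table 4 may be computed as plain Pearson correlations over the paired experts' scores; the sketch below assumes equal-length score lists over the same sample patents.

```python
import numpy as np

def expert_agreement(scores_a, scores_b):
    """Pearson correlation (-1..+1) between two experts' points for the
    same evaluation item on the same sample patents."""
    return float(np.corrcoef(scores_a, scores_b)[0, 1])

# The pair with higher agreement may be trusted more, e.g. (hypothetical
# score lists): expert_agreement(points_A, points_B) -> about 0.64 for the
# electronics pair on "strength of patent right".
```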
After defining the evaluation factors for each evaluation item in such a way, the machine learning model unit performs machine learning based on the expert’s (or patent technician’s) evaluation results (e.g., the evaluation points or scores) stored in the expert evaluation DB. Here, the machine learning serves to objectify the expert’s (or patent technician’s) subjective evaluation results. Specifically, the machine learning model unit calculates a weight based on the expert’s evaluation points (or scores) stored in the expert evaluation DB and performs the machine learning using the calculated weight. At this time, the machine learning may be done per technology (or technical field). As set forth earlier, the evaluation of sample patents performed by an expert is done for each technology, and the machine learning is also conducted for each technology. By way of example, in the mechanical or electronic field, as the length of an independent claim increases, the scope of the claim decreases; in the chemical field, however, the length of an independent claim may have nothing to do with the broadness or narrowness of the claim scope. Accordingly, the machine learning is performed for each technology, and the above-described weight may also be produced separately for each technical field, as sketched below.
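Under that per-field arrangement, an implementation might simply keep one engine per technical field; a minimal sketch reusing the hypothetical build_engine given earlier:

```python
def build_engines_per_field(datasets):
    """datasets: {field: (factor_matrix, expert_scores)}, one expert
    evaluation set per technical field (electronics, mechanics, ...).
    build_engine is the illustrative helper defined in the earlier sketch."""
    return {field: build_engine(X, y) for field, (X, y) in datasets.items()}

# Evaluating a specific patent then uses the engine of the field that
# corresponds to the patent's own technical field.
```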
Among the patent evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136, the patent evaluation unit evaluates patent cases according to the result of the machine learning and stores the evaluated result in the evaluation result DBs 197 and 198.
Meanwhile, when a patent is automatically evaluated by the system as described above, the standards for some evaluation items may be unclear. For example, in the case of the number of independent claims, the number of dependent claims, or the number of claim categories, it is unclear how many is good or how many is bad.
Accordingly, an algorithm is suggested hereinafter in case the standards for some evaluation items are unclear.
Fig. 5 is a flowchart illustrating an example of an algorithm for processing some evaluation items.
Referring to Fig. 5, the evaluation servers 110 and 130 first determine whether the number of independent claims of a patent subject to evaluation, i.e., a patent targeted for evaluation, is two or more (S510).
Unless the number of independent claims is two or more, it is determined whether a family patent case exists (S520). The family patent case may mean a patent existing in the same country.
If there is no family patent case, it is determined whether the number of dependent claims is large (S540). Such determination may be done by comparing the number of dependent claims of the targeted patent with a predetermined value, or with the average number of dependent claims over all issued patents. If the number of dependent claims is large, the evaluation point is increased (S550); if not, the evaluation point is decreased (S560).
Meanwhile, in case there is a family patent case or the number of independent claims is two or more, it is determined whether both an apparatus (or product) claim and a method claim exist (S530). If not, the process proceeds to step S540.
However, in case both an apparatus (or product) claim and a method claim exist, the process proceeds to step S550, where the evaluation point is increased.
As described above, since the evaluation factors such as the number of independent claims, the number of dependent claims, and the number of claim categories have unclear standards, these evaluation factors, rather than being determined individually, are organically or systemically determined in the algorithm, so that they may be evaluated more quantitatively and objectively.
As such, the individual evaluation factors, when organically or systemically organized in an algorithm, may be named complex evaluation factors (or complex factors).
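By way of illustration, the Fig. 5 flow may be transcribed as below; the dependent-claim threshold and the unit point step are placeholders, since the text only prescribes comparison with a predetermined value or with the average over all issued patents.

```python
DEP_THRESHOLD = 5  # placeholder for "the number of dependent claims is large"

def fig5_complex_factor(n_indep, n_dep, has_family, has_app_and_method, point):
    # S510/S520: two or more independent claims, or a family patent case?
    if n_indep >= 2 or has_family:
        # S530: both an apparatus (or product) claim and a method claim?
        if has_app_and_method:
            return point + 1        # S550: increase the evaluation point
    # S540: otherwise, fall through to the dependent-claim count
    if n_dep > DEP_THRESHOLD:
        return point + 1            # S550: increase
    return point - 1                # S560: decrease
```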
Fig. 6 is a flowchart illustrating an example of another algorithm for processing some evaluation items.
Referring to Fig. 6, the evaluation servers 110 and 130 first assign initial (or basic) points (S611). Subsequently, the evaluation servers 110 and 130 determine whether the number of independent claims in a patent targeted for evaluation is two or more (S612).
Unless the number of independent claims is two or more, it is determined whether there is a family patent case (S613). The family patent case may mean a patent in the same country.
If there is no family patent case, the initial (or basic) points are decreased (or reduced) (S614).
Thereafter, the evaluation servers 110 and 130 determine whether the number of dependent claims is large (S615). Such determination may be done by comparing the number of dependent claims of the targeted patent with a predetermined value, or with the average number of dependent claims over all issued patents.
If the number of dependent claims is large, the points increase (S616). However, if the number of dependent claims is small, the points decrease (S617).
Meanwhile, if in step S612 described above the number of independent claims is two or more, the points increase (S618), and whether a family patent case is present is determined (S619).
If a family patent case is determined to be present in step S613 or S619, the evaluation servers 110 and 130 increase the points (S620) and determine whether both an apparatus (or product) claim and a method claim exist (S621). If not, the process goes back to step S615 described above.
However, if both an apparatus (or product) claim and a method claim exist, it is determined whether the number of dependent claims is large (S623). If the number of dependent claims is large, the points increase (S624). However, if the number of dependent claims is small, the points decrease (S625).
As described above in connection with Fig. 6, the evaluation factors such as the number of independent claims, the number of dependent claims, and the number of claim categories have ambiguous standards, and thus the evaluation factors are determined organically or systemically rather than individually, leading to more quantitative and objective evaluation.
As such, the individual evaluation factors, when organically or systemically organized in an algorithm as shown in Fig. 6, may be named complex evaluation factors (or simply complex factors).
Fig. 7 is a flowchart illustrating an example of still another algorithm for processing some evaluation items.
Referring to Fig. 7, the evaluation servers 110 and 130 first assign basic points (S711).
Subsequently, the evaluation servers 110 and 130 determine whether the total number of claims in a patent targeted for evaluation is more than a predetermined threshold (S712). For example, the threshold may be 20.
If the total number of claims in the targeted patent is in excess of the predetermined threshold, the points are increased by a predetermined value (S713), and the points are further increased by a predetermined value for each claim in excess of the threshold (S714).
Thereafter, the evaluation servers 110 and 130 determine whether the number of independent claims in the targeted patent is two or more (S715).
Unless the number of independent claims is two or more, it is determined whether there is a family patent case (S716). If there is no family patent case, the points are decreased by a predetermined value (S717).
Subsequently, the evaluation servers 110 and 130 determine whether an apparatus (or product) claim is present (S718). If no apparatus (or product) claim exists, the points are decreased by a predetermined value (S719).
Then, the evaluation servers 110 and 130 determine whether the number of dependent claims is large (S720). Such determination may be done by comparing the number of dependent claims in the targeted patent with a predetermined value, or with the average number of dependent claims over all issued patents.
If the number of dependent claims is large, the evaluation servers 110 and 130 increase the points by a predetermined value (S721). However, if the number of dependent claims is small, the evaluation servers 110 and 130 reduce the points by a predetermined value (S722).
Meanwhile, if in step S715 above the number of independent claims is two or more, the evaluation servers 110 and 130 increase the points (S723) and determine whether the number of independent claims is more than three (S724).
In case the number of independent claims exceeds three, the evaluation servers 110 and 130 increase the points by a predetermined value for each independent claim in excess of three (S725).
Next, the evaluation servers 110 and 130 determine whether there is a family patent case (S726). If no family patent case exists, the process goes to step S729.
If it is determined in step S726 that there is a family patent case, the evaluation servers 110 and 130 increase the points by a predetermined value (S727) and further increase the points by a predetermined value for each additional family patent (S728).
Thereafter, the evaluation servers 110 and 130 determine whether both an apparatus (or product) claim and a method claim exist (S729). If not, the process returns to the above-described step S728.
However, if both an apparatus (or product) claim and a method claim exist, the points are increased by a predetermined value (S730), and it is determined whether the number of dependent claims is large (S731). If the number of dependent claims is large, the points are increased (S732); otherwise, the points are reduced or the process is terminated.
As described above in connection with Fig. 7, since the evaluation factors such as the total number of claims, the number of independent claims, the number of dependent claims, and the number of claim categories have unclear standards, these evaluation factors are determined not individually but organically, thus resulting in more quantitative and objective evaluation.
As such, the individual evaluation factors, when organically or systemically organized in an algorithm as shown in Fig. 7, may be named complex evaluation factors (simply complex factors).
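A condensed, illustrative transcription of the Fig. 7 point arithmetic follows; it linearizes the branches for readability, the increment STEP and the base points are placeholders, and only the 20-claim threshold comes from the text.

```python
STEP = 1  # placeholder "predetermined value"

def fig7_score(total_claims, n_indep, n_family, has_app_and_method,
               many_dependents, base=10):
    pts = base                                    # S711: basic points
    if total_claims > 20:                         # S712: example threshold 20
        pts += STEP * (1 + (total_claims - 20))   # S713-S714: per extra claim
    if n_indep >= 2:                              # S715
        pts += STEP                               # S723
        pts += STEP * max(0, n_indep - 3)         # S724-S725: per claim over 3
    elif n_family == 0:                           # S716
        pts -= STEP                               # S717
    if n_family > 0:                              # S726
        pts += STEP                               # S727 (S728: per family patent)
    if has_app_and_method:                        # S718/S729
        pts += STEP                               # S730
    else:
        pts -= STEP                               # S719
    return pts + (STEP if many_dependents else -STEP)   # S720/S731-S732
```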
Meanwhile, when a patent is automatically evaluated by a system as described above, some standards are sometimes unclear. For example, in case the possibility of invalidation of a patent (or patent stability) is evaluated, the evaluation being made by a system, not a human, requires standards. Conventionally, there have been no such standards, so automatic evaluation by a system has been impossible.
Accordingly, since the standards for evaluating the possibility of invalidation of a patent (or patent stability) are ambiguous, a complex factor algorithm addressing this is suggested hereinafter.
Fig. 8 is a flowchart illustrating an example of an algorithm for evaluating the possibility of invalidation of a patent (or patent stability) based on complex factors.
Referring to Fig. 8, the evaluation servers 110 and 130 first determine whether an invalidation trial (or appeal) has been filed before (S801).
If the invalidation trial has been filed, it is determined whether there is a written trial decision (S802).
If a written trial decision is present, it is further determined whether the invalidation trial has been conclusively decided, whether the invalidation trial has been withdrawn, or whether litigation against the trial decision is in progress (S803).
If the invalidation trial has been withdrawn after the trial decision was made, or if the litigation against the trial decision is ongoing, initial points (or scores) are assigned to each claim (S804), the invalidated claims are subjected to a maximum point deduction (S805), and the process then goes to step S814.
However, if the invalidation trial has been conclusively decided, the evaluation servers 110 and 130 proceed based on the corrected patent publication reflecting the invalidation result, i.e., excluding the invalidated claims (S806).
In case there is no written trial decision, the evaluation servers 110 and 130 assign initial points to each claim (S807). At this time, in case a corrected patent publication reflecting an invalidation result (i.e., excluding the invalidated claims) is present, the initial points per claim are assigned based on the remaining claims except for the invalidated claims. In assigning initial points to each claim, each independent claim is assigned more points, as basic points, than each dependent claim is. For this purpose, a weight may be given to each independent claim. For example, with 20 points in total, in case the total number of claims is 11, with one independent claim and ten dependent claims, the independent claim is assigned 10 points while the remaining ten dependent claims are each assigned one point, ending up with 20 points in total.
Subsequently, the evaluation servers 110 and 130 search whether there is a prior art document that has not been cited during the prosecution (S808). Such a search for a not-cited prior art document may be achieved by performing an epidemiologic (trace-back) investigation on forward citations or backward citations. For example, in case the prior art documents of posterior patent B, which cites patent A targeted for evaluation, include prior art document C as well as patent A, prior art document C may serve as prior art for patent A.
In case, as above, there is a not-cited prior art document, the evaluation servers 110 and 130 decrease the points of each independent claim by a predetermined value (S809). For example, as described above, if 10 points are assigned to the independent claim and the predetermined value is, e.g., 20%, then 10 × 20% = 2 points are deducted, leaving the independent claim with 10 - 2 = 8 points.
Next, the evaluation servers 110 and 130 determine whether, among the prior art documents acquired from the family patents, there is a prior art document that has not been cited during prosecution (S810). In case such a not-cited prior art document acquired from a family patent is present, the points of each independent claim are decreased by a predetermined value as described above (S811).
Subsequently, the evaluation servers 110 and 130 determine whether the length of each independent claim is larger than an average (S812). In case the length of each independent claim is larger than the average, the evaluation servers 110 and 130 increase the points per independent claim by a predetermined value (S813).
Then, the evaluation servers 110 and 130 determine whether the depth of dependent claims is larger than an average (S814). The “depth of dependent claims” means, when the dependent claims stepwise depend from claims, how many steps the dependent claims have. For example, assuming that claim 1 is an independent claim, and claims 2 to 5 are dependent claims, when claim 2 depends from claim 1, claim 3 depends from claim 2, claim 4 depends from claim 3, and claim 5 depends from claim 4, it could be said, in this case, that the depth of the dependent claims is four steps.
In case the depth of dependent claims is larger than an average for all of the patents, points per dependent claim are increased by a predetermined value (S815), and otherwise, the points per dependent claim are decreased by a predetermined value (S816).
Next, the evaluation servers 110 and 130 determine how many family patents are present for the patent targeted for evaluation (S817). The “family patents” mean patents claiming the same priority or patents acquired by filing divisional or double applications. If the number of family patents for the patent targeted for evaluation is not 0, the total points obtained by summing the points of the claims increased or decreased thus far are increased by a predetermined value (S818). However, in case the number of family patents is 0, the total points are decreased by a predetermined value (S819). This is because the presence of family patents means the invention has gone through additional rounds of examination it would not otherwise have received, which yields more reliable results.
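Steps S817 to S819 can be sketched as a final adjustment on the summed claim points; the 10% rate below is an assumed placeholder for the predetermined value:

```python
def family_adjustment(total_points, n_family_patents, rate=0.10):
    """Increase the summed claim points when family patents exist
    (S818); decrease them when there are none (S819)."""
    if n_family_patents > 0:
        return total_points * (1.0 + rate)
    return total_points * (1.0 - rate)
```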
As described above, whether an invalidation trial has been filed, whether there is a prior art document, and whether a family patent is present are, rather than being determined individually, organically determined within the algorithm, thus allowing for a more quantitative and objective evaluation of the possibility of invalidation of a patent.
As such, the individual evaluation factors, when organically organized in algorithm, may be named complex evaluation factors (simply, complex factors).
Fig. 9 is a flowchart illustrating an example of yet still another algorithm for evaluating the possibility of invalidation of a domestic patent (or patent stability) based on complex factors.
Referring to Fig. 9, the evaluation servers 110 and 130 determine whether there is a corrected patent publication issued after the patent was granted (S901).
In case there is a corrected patent publication after the issue, the evaluation servers 110 and 130 go to step S915 and assign initial points to each claim based on the corrected patent publication (S915).
In case there is no corrected publication, the evaluation servers 110 and 130 check the number of domestic/foreign family patents (S902). In case the number of family patents is 0, the evaluation servers 110 and 130 decrease the total assigned points by a predetermined value (S903).
Subsequently, the evaluation servers 110 and 130 assign initial points to each claim from the points remaining after the deduction, depending on the number of claims (S904). As set forth above, in assigning initial points to each claim, each independent claim is assigned more points as basic points than each dependent claim.
Then, the evaluation servers 110 and 130 determine whether an invalidation trial (or appeal) has been filed before (S905). If an invalidation trial has been filed, it is determined whether there is a written trial decision (S906).
If a written trial decision exists, it is determined whether the patent has been decided to be invalid and, if so, whether it is partially or wholly invalid (S907). If the patent is decided to be valid, the process is terminated without any additional point deduction (S908). However, in case the patent is decided to be partially or wholly invalid, the invalidated claims are subjected to a maximum point deduction process (S909), and the process then ends.
If there is no written trial decision, this might be because the trial has been withdrawn before decision or is underway, and thus, the legal status may be considered to be unstable. Therefore, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S910).
Subsequently, the evaluation servers 110 and 130 search the presence of any prior art document that has not been cited during the prosecution (S911). In case a not-cited prior art document is present, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S912).
Next, the evaluation servers 110 and 130 determine whether there is a not-cited prior art document among the prior art documents acquired from the family patents (S913). In case a not-cited prior art document is present, the points per independent claim are decreased by a predetermined value as described above (S914).
Then, the evaluation servers 110 and 130 determine whether the length of each independent claim is larger than an average (S916). In case the length of each independent claim is smaller than the average, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S917), and otherwise, increase the points per independent claim by a predetermined value (S918).
Next, the evaluation servers 110 and 130 determine whether the number of the limitations in each independent claim is larger than an average for all of the patents (S919). If the number of limitations is smaller than the average, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S920), and otherwise, increase the points per independent claim by a predetermined value (S921).
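The disclosure does not say how limitations are counted; one plausible heuristic, offered only as an assumption, is to count the semicolon-separated clauses after the claim preamble:

```python
import re

def count_limitations(claim_text):
    """Rough limitation count: one per clause after the preamble.

    This is a heuristic stand-in for whatever claim parsing the
    evaluation servers actually perform.
    """
    body = claim_text.split(":", 1)[-1]  # drop the preamble before ':'
    return len([c for c in re.split(r";", body) if c.strip()])

claim = ("An apparatus, comprising: a sensor; a processor coupled to "
         "the sensor; and a display.")
print(count_limitations(claim))  # 3
```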
Next, the evaluation servers 110 and 130 determine whether the depth of the dependent claims is larger than an average for all of the patents (S922). In case the depth of the dependent claims is smaller than the average for all of the patents, the evaluation servers 110 and 130 decrease the points per dependent claim by a predetermined value (S923), and otherwise, increase the points per dependent claim by a predetermined value (S924).
Subsequently, the evaluation servers 110 and 130 determine whether there has been a provision of information or an appeal against the examiner’s final rejection (S925). If any, the total points obtained by summing the points of the claims increased or decreased thus far are increased by a predetermined value (S926).
As described above in connection with Fig. 9, whether an invalidation trial has been filed, whether there is a prior art document, whether there is a family patent, the length of each independent claim, and whether there has been a provision of information are, rather than being determined individually, organically judged within the algorithm, thus allowing for a more quantitative and objective evaluation of the possibility of invalidation.
As such, the individual evaluation factors, when organically organized in an algorithm as shown in Fig. 9, may be named complex evaluation factors (or simply complex factors).
Meanwhile, the steps in Fig. 9 may be properly combined with the steps in Fig. 8. Accordingly, it should be noted that the present invention is not limited to what is illustrated in the drawings.
Fig. 10 is a flowchart illustrating an example of an algorithm for evaluating the possibility of invalidation of a U.S. patent (or patent stability) based on complex factors.
First, the evaluation servers 110 and 130 assign initial points to each claim depending on the number of the claims in a patent targeted for evaluation (S1001). As described above, in assigning initial points to each claim, each independent claim is assigned more points as basic points than each dependent claim.
Subsequently, the evaluation servers 110 and 130 search the presence of any prior art document that has not been cited during the prosecution (S1002). If there is a not-cited prior art document, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1003).
Next, the evaluation servers 110 and 130 determine whether there is a prior art document that has not been cited among the prior art documents acquired from the family patents (S1004). In case there is no not-cited prior art document, the process goes to step S1010. However, in case there is a not-cited prior art document, it is determined whether the family patent citing the prior art document has been issued earlier or later than the patent targeted for evaluation (S1005). If the family patent has been issued earlier than the patent targeted for evaluation, the evaluation servers 110 and 130 determine whether the similarity between the patent targeted for evaluation and the prior art is high (S1006). If the similarity is high, it means that the duty of filing an IDS (Information Disclosure Statement) under 35 U.S.C. (the U.S. patent act) has not been fulfilled, and thus, a maximum point deduction is performed (S1007) and the process is terminated.
However, if the family patent citing the corresponding prior art document has been issued later than the patent targeted for evaluation, the evaluation servers 110 and 130 determine whether the similarity between the patent targeted for evaluation and the prior art is high (S1008). If the similarity is high, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1009).
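The branching in steps S1004 to S1009 can be sketched as follows; the similarity threshold and deduction rate are assumptions, and the "maximum deduction" is modeled here as zeroing the points:

```python
def family_prior_art_penalty(points_per_indep, family_issued_first,
                             similarity, sim_threshold=0.8, rate=0.20):
    """Penalty for a not-cited prior art document found via a family patent.

    If the citing family patent issued before the target and the
    similarity is high, the unfulfilled IDS duty triggers a maximum
    deduction; if it issued later, only the predetermined per-claim
    deduction applies.
    """
    if similarity < sim_threshold:
        return points_per_indep                 # similarity low: no penalty
    if family_issued_first:
        return 0.0                              # maximum point deduction (S1007)
    return points_per_indep * (1.0 - rate)      # predetermined deduction (S1009)
```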
Next, the evaluation servers 110 and 130 determine whether the length of each independent claim is larger than an average (S1010). In case the length of each independent claim is smaller than the average, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1011), and otherwise, increase the points per independent claim by a predetermined value (S1012).
Subsequently, the evaluation servers 110 and 130 determine whether the depth of dependent claims is larger than an average for all of the patents (S1013). If the depth of dependent claims is smaller than the average for all of the patents, the evaluation servers 110 and 130 decrease the points per dependent claim by a predetermined value (S1014), and otherwise, increase the points per dependent claim by a predetermined value (S1015).
Then, the evaluation servers 110 and 130 determine whether there has been a reexamination or reissue under 35 U.S.C. (S1016). If any, the total points obtained by summing the points of claims increased or decreased thus far are increased by a predetermined value (S1017).
As described above in connection with Fig. 10, evaluation factors such as the total number of claims, the number of independent claims, the number of dependent claims, the number of claim categories, reexamination, and reissue have unclear standards. Accordingly, rather than being determined individually, the evaluation factors may be organically determined, leading to a more quantitative and objective evaluation. As such, the individual evaluation factors, when organically organized in an algorithm as shown in Fig. 10, may be named complex evaluation factors (simply, complex factors).
Fig. 11 is a flowchart illustrating another algorithm for evaluating the possibility of invalidation of a U.S. patent (or patent stability) based on complex factors.
First, the evaluation servers 110 and 130 assign initial points to each claim depending on the number of the claims in a patent targeted for evaluation (S1101).
Subsequently, the evaluation servers 110 and 130 search the presence of any prior art document that has not been cited during the prosecution (S1102). As such, in case there is a not-cited prior art document, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1103).
Next, the evaluation servers 110 and 130 determine whether there is a prior art document that has not been cited among the prior art documents acquired from the family patents (S1104). In case there is a not-cited prior art document, it is determined whether the family patent citing the corresponding prior art document has been issued earlier or later than the patent targeted for evaluation (S1105). If the family patent has been issued earlier, the evaluation servers 110 and 130 determine whether the similarity between the patent targeted for evaluation and the prior art is high (S1106). If the similarity is high, it means that the duty to file an IDS (Information Disclosure Statement) under 35 U.S.C. has not been fulfilled, and thus, a maximum point deduction process is conducted (S1107), followed by the termination of the process.
However, if the family patent citing the prior art document has been issued later than the patent targeted for evaluation, the evaluation servers 110 and 130 determine whether the similarity between the patent targeted for evaluation and the prior art is high (S1109). If the similarity is high, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1110).
Next, the evaluation servers 110 and 130 determine whether the length of each independent claim is larger than an average (S1111). In case the length of each independent claim is smaller than the average, the evaluation servers 110 and 130 decrease the points per independent claim by a predetermined value (S1112), and otherwise, increase the points per independent claim by a predetermined value (S1113).
Subsequently, the evaluation servers 110 and 130 determine whether the depth of dependent claims is larger than an average for all of the patents (S1114). In case the depth of dependent claims is smaller than the average for all of the patents, the evaluation servers 110 and 130 decrease the points per dependent claim by a predetermined value (S1115), and otherwise, increase the points per dependent claim by a predetermined value (S1116).
Then, the evaluation servers 110 and 130 determine the number of family patents of the patent targeted for evaluation (S1117). In case the number of the family patents of the patent targeted for evaluation is 0, the total points obtained by summing the points of the claims increased or decreased thus far are decreased by a predetermined value (S1118).
Subsequently, the evaluation servers 110 and 130 determine whether there has been a reissue under 35 U.S.C. (S1119). If any, the total points obtained by summing the points increased or decreased thus far are increased by a predetermined value (S1120).
Next, the evaluation servers 110 and 130 determine whether there has been a reexamination under 35 U.S.C. (S1121). If any, the points are increased to the maximum points (S1123).
As described above in connection with Fig. 11, evaluation factors such as the total number of claims, the number of independent claims, the number of dependent claims, the number of claim categories, reexamination, and reissue have unclear standards. Accordingly, rather than being determined individually, the evaluation factors may be organically determined, thus allowing for a more quantitative and objective evaluation.
As such, the individual evaluation factors, when organically organized in an algorithm as shown in Fig. 11, may be named complex evaluation factors (or simply complex factors).
Meanwhile, the steps in Fig. 11 may be properly combined with the steps in Fig. 10. Accordingly, it should be noted that the present invention is not limited to what is shown in the drawings.
Hereinafter, a method of establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) is described.
Fig. 12 is a flowchart illustrating a method of establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) by performing a machine learning about an expert’s evaluation result according to an embodiment of the present invention.
As can be seen from Fig. 12 and what has been described above, evaluation items may be previously defined (S1210). The evaluation items may be, as described earlier, defined as strength of patent right, quality of technology, and usability. Or, the evaluation items may also be defined as strength of patent right and marketability (or, commercial potential). Such definitions may be changed depending on what goals are to be achieved by evaluating patents.
Subsequently, the service servers 120 and 140 primarily map evaluation items to evaluation factors for sample patents and provide the result of the mapping to an expert’s computer (S1220). The primary mapping may map, to each evaluation item, the candidate evaluation factors inferred to be associated with that item.
Next, the result of evaluating the sample patents may be received from the expert’s computer (S1230). The evaluation result may be the points given by the expert to each evaluation item. For this, the service servers 120 and 140 may prepare a webpage to provide information to the expert’s computer and to receive the evaluation result.
Subsequently, correlations between the evaluation factors and one or more prepared evaluation items may be calculated based on the expert’s evaluation results for the sample patents (S1240). Each correlation may have a value from -1 to +1 as described above.
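The correlation between a factor’s values and the expert’s scores across the sample patents could be computed, for instance, as a Pearson coefficient, which by construction lies between -1 and +1; the disclosure does not name the specific measure, so this choice is an assumption:

```python
import math

def pearson(factor_values, expert_scores):
    """Pearson correlation between one evaluation factor and the
    expert's scores over the sample patents; ranges from -1 to +1."""
    n = len(factor_values)
    mx = sum(factor_values) / n
    my = sum(expert_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(factor_values, expert_scores))
    sx = math.sqrt(sum((x - mx) ** 2 for x in factor_values))
    sy = math.sqrt(sum((y - my) ** 2 for y in expert_scores))
    return cov / (sx * sy) if sx and sy else 0.0
```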
Next, remapping may be done between each evaluation item and the evaluation factors based on the calculated correlations (S1250). By such remapping, some of the evaluation factors primarily mapped to an evaluation item may be excluded from the mapping, and other evaluation factors may be newly mapped to a given evaluation item as well.
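Remapping may then keep, for each evaluation item, only the factors whose correlation magnitude clears some threshold; the threshold value below is an assumed placeholder:

```python
def remap_factors(correlations, threshold=0.3):
    """Remap evaluation items to factors by correlation strength.

    `correlations` maps each evaluation item to {factor: correlation};
    factors below |threshold| are excluded, and a factor may end up
    mapped to any item for which its correlation is strong enough.
    """
    return {item: sorted(f for f, r in factors.items() if abs(r) >= threshold)
            for item, factors in correlations.items()}
```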
As such, once the mapping is done, the evaluation factors mapped to the evaluation items may be used to perform machine learning on the expert’s evaluation results, thereby establishing an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) (S1260).
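The disclosure leaves the learning method open; as one hedged illustration, a linear model with one weight per mapped factor could be fitted to the expert’s scores by gradient descent:

```python
def fit_engine(samples, scores, lr=0.01, epochs=2000):
    """Fit a tiny linear evaluation engine to the expert's scores.

    `samples` is a list of factor vectors (one per sample patent) and
    `scores` holds the expert's points for one evaluation item. This is
    a stand-in for the unspecified machine-learning step, not the
    patent's actual method.
    """
    d = len(samples[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, scores):
            pred = b + sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b
```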
Fig. 13 is a flowchart illustrating a method of providing a patent evaluation service using an evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent) according to an embodiment of the present invention.
As can be seen from Fig. 13, the service servers 120 and 140 may receive information on a specific patent from a user device (S1310) and may receive a request for evaluating the specific patent from the user device (S1320). For this purpose, the service servers 120 and 140 may provide a webpage to the user’s computer.
Then, the service servers 120 and 140 may provide a result of the evaluation that has been yielded on a specific patent identified using the information, using a previously established evaluation engine or an artificially intelligent evaluation-bot (or evaluation agent), so that the result may be output through the user’s computer (S1330).
At this time, the service servers 120 and 140 may simply provide the result of evaluation only. However, the service servers 120 and 140 may also generate an evaluation report and may provide the generated evaluation report to the user’s computer. The evaluation report may include the yielded evaluation result and additional description on the evaluation result. Such evaluation report may be made in the PDF format or may be based on a webpage.
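Putting Fig. 13 together, the request-handling flow might look like the following sketch; `extract_factors` and the callable `engine` are hypothetical placeholders for the components described above:

```python
def handle_evaluation_request(patent_id, engine, extract_factors):
    """Sketch of the service flow: extract the patent's evaluation
    factors, run the previously built engine, and return a result the
    web layer can render as a page or as a PDF evaluation report."""
    factors = extract_factors(patent_id)     # evaluation factors for the patent
    score = engine(factors)                  # previously established engine
    return {"patent": patent_id, "evaluation_result": score}
```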
The embodiments disclosed herein have been described with reference to the accompanying drawings. Here, the above-described methods may be implemented by various means. For example, the embodiments of the present invention may be embodied in hardware, firmware, or software, or a combination thereof.
When implemented in hardware, methods according to embodiments of the present invention may be realized in one or more ASICs (application specific integrated circuits), DSPs (digital signal processors), DSPDs (digital signal processing devices), PLDs (programmable logic devices), FPGAs (field programmable gate arrays), processors, controllers, microcontrollers, or microprocessors.
When implemented in firmware or software, methods according to embodiments of the present invention may be realized as modules, procedures, or functions that perform the above-described functions or operations. The software codes may be stored in a memory unit and executed by a processor. The memory unit may be positioned inside or outside the processor and may exchange data with the processor via various known means.
Fig. 14 illustrates the physical configuration of evaluation servers 110 and 130 and service servers 120 and 140 according to an embodiment of the present invention.
As shown in Fig. 14, the evaluation servers 110 and 130 may include transmitting/receiving units 110a and 130a, controllers 110b and 130b, and storage units 110c and 130c, and the service servers 120 and 140 may include transmitting/receiving units 120a and 140a, controllers 120b and 140b, and storage units 120c and 140c.
The storage units store programs implementing the methods illustrated in Figs. 4 to 13 and described above. For example, the storage units 110c and 130c of the evaluation servers 110 and 130 may store a program in which the above-described specification processing units 111 and 131, natural language processing units 112 and 132, keyword extracting units 113 and 133, similar patent extracting units 114 and 134, evaluation factor (or evaluation index) processing units 115 and 135, and patent evaluation engine or artificially intelligent evaluation-bot (or evaluation agent) units 116 and 136 are implemented. The storage units 120c and 140c of the service servers 120 and 140 may store one or more of the evaluation report generating units 121 and 141 and the portfolio analysis units 122 and 142.
The controllers control the transmitting/receiving units and the storage units. Specifically, the controllers execute the programs or the methods stored in the storage units. The controllers transmit and receive signals through the transmitting/receiving units.
The embodiments disclosed herein have been described thus far with reference to the accompanying drawings. Here, the terms or words used in the specification and claims should not be construed as limited to their common or dictionary meanings, but should rather be interpreted to have the meanings and concepts that fit the technical spirit disclosed herein.
Accordingly, the embodiments disclosed herein are merely examples of the present invention and do not represent all of the technical spirit disclosed herein, and it should be understood that various equivalents and modifications may be made thereto, which may replace the embodiments of the present invention.

Claims (19)

  1. A method of evaluating a patent based on complex factors, the method performed by a computer and comprising:
    receiving, by the computer, an evaluation request for a specific patent from a user device; and
    providing, by the computer, an evaluation result, which is yielded for the specific patent using an evaluation engine of the computer, to the user device,
    wherein the evaluation engine includes an algorithm in which two or more of the total number of claims, the number of independent claims, and the number of dependent claims are organically systematized into the complex factor, and
    wherein the evaluation engine yields the evaluation result using the complex factors of the algorithm.
  2. The method of claim 1, wherein the yielding the evaluation result using the complex factors of the algorithm includes:
    determining whether the number of independent claims with respect to the specific patent is equal to or higher than a predetermined number;
    determining whether there exists a family patent with respect to the specific patent, when the number of independent claims is less than the predetermined number; and
    determining whether there exist both an apparatus (or product) claim and a method claim in the specific patent, when the number of independent claims is equal to or more than the predetermined number.
  3. The method of claim 2, wherein the yielding the evaluation result using the complex factors of the algorithm includes:
    determining whether the total number of claims with respect to the specific patent is higher than a predetermined number.
  4. The method of claim 2, wherein the yielding the evaluation result using the complex factors of the algorithm includes:
    determining whether the number of dependent claims is higher than a predetermined number.
  5. The method of claim 1, wherein the evaluation engine is built by performing a machine learning about patent technician’s evaluation results for sample patents in view of each evaluation item.
  6. The method of claim 5, wherein the evaluation engine is built by one or more of:
    calculating correlations between evaluation factors and one or more predetermined evaluation items based on the patent technician’s evaluation results for the sample patents;
    mapping the evaluation items with the evaluation factors based on the calculated correlations; and
    performing the machine learning about the patent technician’s evaluation results using the evaluation factors mapped with the evaluation items.
  7. The method of claim 6, wherein the evaluation factors include
    information extracted from one or more of bibliographical information, prosecution history information, a specification, and patented claims, or
    information extracted by performing a natural language process on the specification and the patented claims.
  8. The method of claim 5, wherein the patent technician’s evaluation results are performed for each technical field,
    wherein the evaluation engine is built for each technical field, and
    wherein the providing of the evaluation result is performed using an evaluation engine of a technical field corresponding to a technical field of the specific patent.
  9. The method of claim 5, wherein
    the building of the evaluation engine includes
    calculating correlation values between the results evaluated by a plurality of patent technicians in each technical field; and
    building the evaluation engine based on the patent technician’s evaluation result having the highest correlation value among the calculated correlation values.
  10. An evaluation server comprising:
    an interface unit configured to receive an evaluation request for a specific patent from a user device; and
    an evaluation engine unit configured to generate an evaluation result for the specific patent,
    wherein the interface unit provides the generated evaluation result to the user device, and
    wherein the evaluation engine unit includes an algorithm in which two or more of the total number of claims, the number of independent claims, and the number of dependent claims are organically systematized into the complex factor, and
    wherein the evaluation engine yields the evaluation result using the complex factors of the algorithm.
  11. A method of evaluating a possibility of invalidation or patent stability of a requested patent case based on complex factors, the method performed by a computer and comprising:
    receiving, by the computer, an evaluation request for a specific patent case from a user device; and
    providing, by the computer, an evaluation result, which is yielded for the specific patent using an evaluation engine, to the user device,
    wherein the evaluation engine includes an algorithm in which two or more of a factor about the length of an independent claim of the specific patent case, a factor about whether there is a prior art document for the specific patent case, and a factor about whether there is a family patent are organically systematized into the complex factor, and
    wherein the evaluation engine determines how much possibility of the invalidation the specific patent case has according to the complex factors.
  12. The method of claim 11, wherein the yielding the evaluation result using the complex factors of the algorithm includes:
    assigning reference points to each claim of the specific patent case according to the number of independent claims and the number of dependent claims; and
    increasing or decreasing the reference points of each claim by a predetermined value based on one or more of the length of an independent claim, a factor about whether there is a prior art document for the specific patent case, and a factor about whether there is a family patent.
  13. The method of claim 12, wherein the increasing or decreasing of the reference points of each claim includes:
    determining whether there is the prior art document; and
    increasing or decreasing the reference points of each claim based on a factor of whether or not the prior art document was cited during a prosecution of the specific patent case.
  14. The method of claim 12, wherein the factor about whether there is the prior art document is acquired from forward citation or backward citation or from a family patent.
  15. The method of claim 11, wherein the evaluation engine is built by performing a machine learning about patent technician’s evaluation results for sample patents in view of each evaluation item.
  16. The method of claim 15, wherein the evaluation engine is built by one or more of:
    calculating correlations between evaluation factors and one or more predetermined evaluation items based on the patent technician’s evaluation results on the sample patents;
    mapping the evaluation items with the evaluation factors based on the calculated correlations; and
    performing the machine learning about the patent technician’s evaluation results using the evaluation factors mapped with the evaluation items.
  17. The method of claim 16, wherein the evaluation factors include
    information extracted from one or more of bibliographical information, prosecution history information, a specification, and patented claims, or
    information extracted by performing a natural language process on the specification and the patented claims.
  18. The method of claim 15, wherein the patent technician’s evaluation results are performed for each technical field,
    wherein the evaluation engine is built for each technical field, and
    wherein the providing of the evaluation result is performed using an evaluation engine of a technical field corresponding to a technical field of the specific patent.
  19. The method of claim 15, wherein the building of the evaluation engine includes
    calculating correlation values between the results evaluated by a plurality of patent technicians in each technical field; and
    building the evaluation engine based on the patent technician’s evaluation result having the highest correlation value among the calculated correlation values.
PCT/KR2013/010950 2012-12-12 2013-11-29 Method for evaluating patents based on complex factors WO2014092360A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR1020120144315A KR101456188B1 (en) 2012-12-12 2012-12-12 Method for automatically evaluating invalidation of patents based on complex factors
KR10-2012-0144315 2012-12-12
KR10-2012-0144325 2012-12-12
KR1020120144325A KR101456187B1 (en) 2012-12-12 2012-12-12 Method for evaluating patents based on complex factors

Publications (1)

Publication Number Publication Date
WO2014092360A1 true WO2014092360A1 (en) 2014-06-19

Family

ID=50934598

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/010950 WO2014092360A1 (en) 2012-12-12 2013-11-29 Method for evaluating patents based on complex factors

Country Status (1)

Country Link
WO (1) WO2014092360A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100114587A1 (en) * 2006-11-02 2010-05-06 Hiroaki Masuyama Patent evaluating device
KR20080048828A (en) * 2006-11-29 2008-06-03 한국기술거래소 Method and system for valuating a validity of a commercialization of an idea, method and system for servicing an editing of a technology evaluating report and computer readable medium storing a program thereof
KR20110068277A (en) * 2009-12-15 2011-06-22 한국발명진흥회 Patent rating system and rating factor information processing method of the same system
KR20120095593A (en) * 2011-02-21 2012-08-29 아이피텍코리아 주식회사 Method of evaluating the value of technology for patent

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3489885A1 (en) * 2017-11-27 2019-05-29 Korea Invention Promotion Association System and method for valuating patent using multiple regression model and system and method for building patent valuation model using multiple regression model
EP4033443A1 (en) * 2017-11-27 2022-07-27 Korea Invention Promotion Association A system for provide patent valuation results to a user device
US11687321B2 (en) 2017-11-27 2023-06-27 Korea Invention Promotion Association System and method for valuating patent using multiple regression model and system and method for building patent valuation model using multiple regression model
US20210004921A1 (en) * 2019-07-03 2021-01-07 Aon Risk Services, Inc. Of Maryland Analysis Of Intellectual-Property Data In Relation To Products And Services
US11941714B2 (en) * 2019-07-03 2024-03-26 Aon Risk Services, Inc. Of Maryland Analysis of intellectual-property data in relation to products and services
CN113191870A (en) * 2021-01-19 2021-07-30 迅鳐成都科技有限公司 Intellectual property value evaluation method and system based on block chain
CN113191870B (en) * 2021-01-19 2023-08-08 迅鳐成都科技有限公司 Intellectual property value evaluation method and system based on blockchain

Legal Events

Code / Description
121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13862152; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122: Ep: pct application non-entry in european phase (Ref document number: 13862152; Country of ref document: EP; Kind code of ref document: A1)