WO2009039311A2 - Method for a contextual, vector-based content-serving system - Google Patents

Method for a contextual, vector-based content-serving system

Info

Publication number
WO2009039311A2
WO2009039311A2 (PCT/US2008/076905)
Authority
WO
WIPO (PCT)
Prior art keywords
terms
content
context
list
webpage
Prior art date
Application number
PCT/US2008/076905
Other languages
French (fr)
Other versions
WO2009039311A3 (en)
Inventor
Gregory Harrison
Original Assignee
Mpire Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mpire Corporation filed Critical Mpire Corporation
Publication of WO2009039311A2 publication Critical patent/WO2009039311A2/en
Publication of WO2009039311A3 publication Critical patent/WO2009039311A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising


Abstract

Methods and systems for achieving contextual ad serving without the need for an expansive infrastructure that stores every possible hosting URL, while ensuring always-current page context by evaluating the context of the hosting page/URL in real time, for every ad impression. This is achieved by reversing the model: identifying the corpus of all terms relevant to the available ad inventory (i.e., a selective set of terms) rather than attempting to evaluate the corpus of all terms residing in the hosting page/URL.

Description

METHOD FOR A CONTEXTUAL, VECTOR-BASED CONTENT-SERVING
SYSTEM
PRIORITY CLAIM
[0001] This application claims the benefit of U.S. Provisional Application Ser. No. 60/973,393 filed September 18, 2007, the contents of which are hereby incorporated by reference.
BACKGROUND OF THE INVENTION
[0002] Most commonly-accepted methodologies relating to contextual web site analysis and ad serving conform to a specific (deferred-analysis) model whereby the first request for an advertisement logs the hosting page URL to a deferred offline, queue-based system. In this traditional system, the first few ad requests are fulfilled with stock content or public-service ads while the offline process works through a queue of all pending URLs, iteratively crawling each page and analyzing its content in its entirety to derive a specific contextual classification of the page/URL, a very time- and CPU-intensive evaluation. Once the context (and its corresponding ad content) is derived, the URL entry in the database is coded accordingly, so that further requests from this hosting page/URL can simply be referenced against this now-preclassified context to serve appropriate content. This commonly accepted model requires an enormous database and considerable processing power at scale, because the system must maintain a list of literally every possible URL that hosts an ad placement. The context value of the page is also limited by the frequency with which the hosting page/URL is re-evaluated for new content.
SUMMARY OF THE INVENTION
[0003] The present invention provides an alternative method of achieving contextual ad serving without the need for this expansive infrastructure of storing every possible hosting URL, while ensuring always-current page context by evaluating the context of the hosting page/URL in real time, for every ad impression. This is achievable by reversing the model: identifying the corpus of all terms relevant to the available ad inventory (i.e., a selective set of terms) rather than attempting to evaluate the corpus of all terms residing in the hosting page/URL. The management of this selective set of terms (ContextBuckets), the manner in which these terms are associated with the appropriate ad content (Ad Content -> ContextBucket), and the mechanism for evaluating the hosting page/URL against this set of selective terms (via the condensing of the term sets into the TokenSpace) are the three primary claims in support of this filing.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Preferred and alternative examples of the present invention are described in detail below with reference to the following drawings:
[0005] FIGURE 1 is a block diagram of an example system formed in accordance with an embodiment of the present invention; and
[0006] FIGURES 2-4 are flow diagrams showing processes performed by the system components shown in FIGURE 1.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0007] The present invention provides a system and method for deriving contextually-relevant associations between groups of pre-defined content and a web page in which the content will be rendered, for the purpose of delivering ad content to a web page with the highest possible context relative to the target page content. The invention allows highly relevant content (i.e., advertising) to be delivered to a web page.
[0008] FIGURE 1 illustrates a network environment 20 that includes components coupled to a network 34 for performing the above-described service. The network environment 20 includes a web publishing system 36 that produces web content (web page(s) 38) available to user computer-based systems 40 over the network 34. A server 30 with memory 32 provides ad content from the server 30 (memory 32) to a user computer-based system 40 that has accessed a web page 38 that includes an ad content request.
[0009] FIGURE 2 illustrates a process 60 performed by the server 30 for creating a data package that is to be sent upon request to a web browser running on the user computer-based system 40 that has received the web page 38 that includes an ad content request. First, at block 62, one or more context buckets are created. Then, at block 64, the content of all the buckets is reduced, normalized and rejected based on predefined rules to produce a list. Next, at block 66, a vector is created for each bucket based on the contents of each bucket and the list. The list, the created vectors and an analyzer engine are included in the data package that is to be sent upon request. The analyzer engine is described in FIGURE 3.
[0010] FIGURE 3 illustrates a process 80 performed at least partially at the user computer-based system 40 (by the analyzer engine). First, at block 82, a user requests a website (webpage 38) via a web browser running on their computer-based system 40. If the webpage 38 includes the ad request (i.e., a URL directed to the server 30), decision block 84, the data package is retrieved from the server 30 at block 88. Then, at block 90, the analyzer engine at the web browser generates a list of terms by performing normalization and rejection of terms in the webpage 38. Next, at block 92, the analyzer engine generates a webpage vector by comparing the list of terms in the webpage 38 with the list included in the data package. The analyzer engine determines which bucket vector is the closest match to the webpage vector, at block 94. At block 96, information related to the closest matching bucket vector is sent to the server 30. At block 98, the web browser receives ad content from the server that corresponds to the information sent at block 96, and the ad content is displayed in the webpage 38.
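The client-side matching in process 80 (blocks 90 through 96) can be sketched in a few lines. This is an illustrative reconstruction, not the patent's code; the function names (`make_vector`, `closest_bucket`) and the angle-based distance are assumptions based on the vector comparison described later in the document.

```python
import math

def make_vector(page_terms, term_list):
    """Block 92: count how often each packaged term occurs on the page."""
    return [page_terms.count(term) for term in term_list]

def angle(a, b):
    """Angle between two term vectors; a smaller angle means a closer context match."""
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.acos(dot / mag) if mag else math.pi

def closest_bucket(page_terms, term_list, bucket_vectors):
    """Block 94: pick the bucket whose vector best matches the webpage vector."""
    page_vec = make_vector(page_terms, term_list)
    return min(bucket_vectors, key=lambda name: angle(page_vec, bucket_vectors[name]))
```

The returned bucket name stands in for the information that block 96 would send back to the server to select ad content.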
[0011] This method is herein referred to as the CLASS method. The CLASS method includes four basic object types: the TokenSpace, the ContextBucket, the Centroid, and the Document.
[0012] The ContextBucket serves as a named definition to be eventually associated with a collection of web content (ex: the advertisement content). The ContextBucket has two pieces of member data: a Name and a set of n-grams, which are used as a basis for generating a Centroid. The set of n-grams are descriptors for the ContextBucket.
[0013] The Centroid is a normalized representation of the ContextBucket. Normalization in this context is defined as one of many methods available for down- casting and/or stemming of n-grams combined with an accept/reject methodology for n-grams.
[0014] The TokenSpace is a union of all normalized n-grams of each Centroid, ordered by an ordering function (ex: a Latin alphabetical sort).
[0015] A source-document represents the content being evaluated for contextual mapping.
[0016] The Document represents the normalized version of the source- document that will be used for term-vector distancing against the Centroids in the TokenSpace.
[0017] The associations between these elements are visually represented in the FIGURE.
[0018] The CLASS method is described as follows:
[0019] At startup, the set of all defined ContextBuckets are iterated over and a Centroid is created for each ContextBucket. The set of n-grams is iterated over and each n-gram is either accepted or rejected by a Centroid building function. Accepted n-grams are then normalized via one or more pluggable normalization providers and are then added to the Centroid. One such example normalization would be keyword stemming (stemming is a process for reducing inflected (or sometimes derived) words to their stem, base or root form).
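The accept/reject and pluggable-normalization step above can be illustrated with a toy pipeline. The tiny dictionary and the suffix-stripping stemmer below are stand-ins invented for this sketch; a production provider would plug in a full lexicon and a real stemmer (e.g. a Porter stemmer).

```python
# Toy dictionary for the accept/reject function (illustrative only).
DICTIONARY = {"puppy", "puppies", "pet", "pets", "kitty", "litter"}

def reject(ngram):
    # Accept only dictionary words; every word of a multi-word n-gram must match.
    return not all(w.lower() in DICTIONARY for w in ngram.split())

def downcase(ngram):
    return ngram.lower()

def stem(ngram):
    # Toy stemmer: strip a couple of common suffixes; real providers are pluggable.
    for suffix in ("ies", "s"):
        if ngram.endswith(suffix) and len(ngram) > len(suffix) + 2:
            return ngram[: -len(suffix)] + ("y" if suffix == "ies" else "")
    return ngram

def build_centroid_ngrams(candidates, normalizers=(downcase, stem)):
    """Accept, normalize, and collect n-grams for one Centroid."""
    accepted = set()
    for ngram in candidates:
        if reject(ngram):
            continue
        for fn in normalizers:
            ngram = fn(ngram)
        accepted.add(ngram)
    return accepted
```

Chaining the normalizers in order (down-casting before stemming) mirrors the "one or more pluggable normalization providers" described above.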
[0020] There now exists a set of "unfinished" Centroids SC = {C0...Cn}. Then, the union of all of the n-grams of each Centroid is determined. The n-grams in the union are ordered via an ordering function - typically the natural (Latin alphabetical) order of the n-grams can be used. This ordered union of all Centroids is called the TokenSpace.
[0021] Next, each Centroid is bound to the TokenSpace and a term vector is computed for each Centroid in the TokenSpace. A term vector in this context is a simple list of integers corresponding to the TokenSpace, where each member of the list is equal to the count of the occurrences of the corresponding term from the TokenSpace in the provided Centroid or Document.
[0022] As an example, assume there are two ContextBuckets:
[0023] ContextBucket 1
[0024] name: "Dogs"
[0025] n-gram candidates: "Puppy" "Labrador" "Golden Retriever" "pet" "27"
[0026] ContextBucket 2
[0027] name: "Cats"
[0028] n-gram candidates: "Kitty" "Catnip" "Litter" "Pet" "Whiskers"
[0029] Assuming the rejection function was to only accept dictionary words and the normalization function was simply the down-casting function, the following language definition is attained:
[0030] L = ["catnip", "golden retriever", "kitty", "labrador", "litter", "pet", "puppy", "whiskers"]
[0031] Thus the following two Centroids would be cast as:
[0032] Centroid "Dogs"
[0033] n-grams: "puppy" "labrador" "golden retriever" "pet"
[0034] term-vector: [0,1,0,1,0,1,1,0]
[0035] Centroid "Cats"
[0036] n-grams: "kitty" "catnip" "litter" "pet" "whiskers"
[0037] term-vector: [1,0,1,0,1,1,0,1]
[0038] The system is then ready to accept documents for categorization/mapping against the Centroids.
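The Dogs/Cats example above can be checked mechanically. The sketch below hard-codes the normalized Centroid n-grams and recomputes the language definition L and both term-vectors; it is purely illustrative and assumes the rejection and down-casing have already been applied.

```python
from collections import Counter

# Normalized n-grams of each Centroid, as given in the example.
dogs = ["puppy", "labrador", "golden retriever", "pet"]
cats = ["kitty", "catnip", "litter", "pet", "whiskers"]

# TokenSpace L: alphabetically ordered union of both Centroids' n-grams.
L = sorted(set(dogs) | set(cats))

def term_vector(ngrams, token_space):
    """One integer per TokenSpace term: its occurrence count in `ngrams`."""
    counts = Counter(ngrams)
    return [counts[t] for t in token_space]

dogs_vec = term_vector(dogs, L)  # expected [0,1,0,1,0,1,1,0]
cats_vec = term_vector(cats, L)  # expected [1,0,1,0,1,1,0,1]
```

Both computed vectors agree with the term-vectors listed for the "Dogs" and "Cats" Centroids above.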
[0039] When the system is asked to categorize a source-document, it passes the source document to a Tokenizer. The role of the Tokenizer is to present a set of n- gram candidates to a Document Builder. The Tokenizer uses the same normalization and rejection functions as were configured for the generation of Centroids to process all keywords in the document. Only those normalized keywords/n-grams from the source document that exist in the TokenSpace can be represented as candidates. The Document Builder then builds a Document to represent the source data. Thus, the Document represents a normalized set of matching n-grams from the source- document.
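A minimal Tokenizer sketch under the constraints just stated: it applies the same normalization used for the Centroids (here just down-casing, an assumption) and admits only n-grams already present in the TokenSpace.

```python
def tokenize(source_text, token_space, normalize=str.lower):
    """Present only those normalized n-grams that exist in the TokenSpace."""
    candidates = []
    for word in source_text.split():
        ngram = normalize(word)
        if ngram in token_space:
            candidates.append(ngram)
    return candidates
```

Multi-word n-grams such as "golden retriever" would require a sliding-window tokenizer; this sketch handles unigrams only.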
[0040] For example, if one were to attempt to categorize the contents of a (fictional) web page URL (http://www.kittylitter.com), the entirety of the page content is essentially reduced to a set of normalized n-grams derived from this document-source:
[0041] Document-source: http://www.kittylitter.com
[0042] n-grams: ["kitty", "kitty", "kitty", "catnip", "catnip", "pet", "labrador", "pet", "whiskers", "puppy", "kitty", "litter", "litter", "litter"]
[0043] term-vector: [2,0,4,1,4,2,1,1]
[0044] The document source URL, n-grams, and term-vector are constructed into a Document. Once this Document is constructed, it is passed to a BucketMapper, which categorizes the Document by mapping it to the Centroids in the system.
[0045] This mapping by the BucketMapper is performed by finding the Centroid with the "nearest-neighbor" term-vector to the requested Document in the TokenSpace.
[0046] Given the definition of the dot product:
[0047] a · b = |a||b| cos(∠(a, b))
[0048] Then:
[0049] ∠(a, b) = arccos((a · b) / (|a||b|))
[0050] This formula is used to calculate the angles between each Centroid and the given Document, and the Centroid with the lowest angle is chosen as the Centroid for the Document. Since the Centroid is simply a normalized version of the ContextBucket, the desired mapping from source-document to ContextBucket exists.
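Applying the arccos formula to the example vectors given earlier confirms the mapping; this is an illustrative check, not the patent's implementation.

```python
import math

def angle(a, b):
    """Angle between two term vectors via the dot-product identity above."""
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.acos(dot / mag)

centroids = {
    "Dogs": [0, 1, 0, 1, 0, 1, 1, 0],
    "Cats": [1, 0, 1, 0, 1, 1, 0, 1],
}
document = [2, 0, 4, 1, 4, 2, 1, 1]  # the kittylitter.com term-vector

# The Centroid with the smallest angle to the Document wins the mapping.
best = min(centroids, key=lambda name: angle(document, centroids[name]))
```

Here `best` comes out as "Cats" (roughly 28 degrees versus 72 degrees for "Dogs"), matching the intended kitty-litter context.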
[0051] Now that the association between the source-document (ex: http://www.kittylitter.com), its Document, and the mapped Centroid/ContextBucket have been derived, the ContextBucket can be used in association with the delivery of any desired web content.
[0052] For example, all ContextBuckets can be associated with one or more pieces of ad content. Once the source-document has been mapped to a ContextBucket, the associated ad content can be delivered to the source-document.
[0053] While the preferred embodiment of the invention has been illustrated and described, as noted above, many changes can be made without departing from the spirit and scope of the invention. Accordingly, the scope of the invention is not limited by the disclosure of the preferred embodiment. Instead, the invention should be determined entirely by reference to the claims that follow.

Claims

[0054] The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A method for creating a webpage ad analysis tool, the method comprising:
defining one or more context categories, each context category comprising one or more terms;
combining the terms of all the context categories;
normalizing the combined terms;
reducing the normalized terms to remove redundancy;
rejecting terms from the reduction based on predefined rules;
generating a list of terms based on the results of the rejection;
creating a vector for each context category based on the terms of each context category and the generated list; and
creating a deliverable data package comprising the list, the created context category vectors and an analyzer engine,
wherein each context category is associated with a unique advertising category, and wherein each advertising category includes associated ad content.
2. A method for analyzing a webpage and inserting ad content based on the analysis, the method comprising:
if a webpage accessed via a web browser includes an ad request, retrieving a data package from a server associated with the ad request, the data package comprising a list of normalized, reduced, and rejected content from one or more context categories, vectors associated with each of the one or more context categories based on the list, and an analyzer engine;
generating a list of content by performing normalization and rejection of terms included in the accessed webpage;
generating a webpage vector by comparing the generated list of content with the list included in the data package;
determining the context category vector having the closest match to the webpage vector;
sending information related to the determined closest matching context category vector to the server;
receiving ad content from the server that corresponds to the sent information; and
displaying the received ad content in the webpage.
3. A system for creating a webpage ad analysis tool, the system comprising:
a means for defining one or more context categories, each context category comprising one or more terms;
a means for combining the terms of all the context categories;
a means for normalizing the combined terms;
a means for reducing the normalized terms to remove redundancy;
a means for rejecting terms from the reduction based on predefined rules;
a means for generating a list of terms based on the results of the rejection;
a means for creating a vector for each context category based on the terms of each context category and the generated list; and
a means for creating a deliverable data package comprising the list, the created context category vectors and an analyzer engine,
wherein each context category is associated with a unique advertising category, and wherein each advertising category includes associated ad content.
4. A system for analyzing a webpage and inserting ad content based on the analysis, the system comprising:
a means for retrieving, if a webpage accessed via a web browser includes an ad request, a data package from a server associated with the ad request, the data package comprising a list of normalized, reduced, and rejected content from one or more context categories, vectors associated with each of the one or more context categories based on the list, and an analyzer engine;
a means for generating a list of content by performing normalization and rejection of terms included in the accessed webpage;
a means for generating a webpage vector by comparing the generated list of content with the list included in the data package;
a means for determining the context category vector having the closest match to the webpage vector;
a means for sending information related to the determined closest matching context category vector to the server;
a means for receiving ad content from the server that corresponds to the sent information; and
a means for displaying the received ad content in the webpage.
PCT/US2008/076905 2007-09-18 2008-09-18 Method for a contextual, vector-based content-serving system WO2009039311A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US97339307P 2007-09-18 2007-09-18
US60/973,393 2007-09-18
US98668007P 2007-11-09 2007-11-09
US60/986,680 2007-11-09

Publications (2)

Publication Number Publication Date
WO2009039311A2 true WO2009039311A2 (en) 2009-03-26
WO2009039311A3 WO2009039311A3 (en) 2009-05-22

Family

ID=40468773

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/076905 WO2009039311A2 (en) 2007-09-18 2008-09-18 Method for a contextual, vector-based content-serving system

Country Status (1)

Country Link
WO (1) WO2009039311A2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040059708A1 (en) * 2002-09-24 2004-03-25 Google, Inc. Methods and apparatus for serving relevant advertisements
US20040181525A1 (en) * 2002-07-23 2004-09-16 Ilan Itzhak System and method for automated mapping of keywords and key phrases to documents
KR20040104060A (en) * 2003-06-02 2004-12-10 송재현 Linking method of related site with keyword db mining of blog contents
KR20070029389A (en) * 2005-09-09 2007-03-14 주식회사 엠퓨처 Method and system for providing advertisement service using core keyword and record media recorded prgram for realizing the same

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017008552A1 (en) * 2015-07-15 2017-01-19 腾讯科技(深圳)有限公司 Multimedia information popup window processing method and device, and computer storage medium
US10579690B2 (en) 2015-07-15 2020-03-03 Tencent Technology (Shenzhen) Company Limited Multimedia information pop-up window processing method and device, and computer storage medium

Also Published As

Publication number Publication date
WO2009039311A3 (en) 2009-05-22

Similar Documents

Publication Publication Date Title
US20210103964A1 (en) Account manager virtual assistant using machine learning techniques
US10891699B2 (en) System and method in support of digital document analysis
CA2504181C (en) Verifying relevance between keywords and web site contents
RU2393533C2 (en) Offering allied terms for multisemantic inquiry
US8108204B2 (en) Text categorization using external knowledge
US7890503B2 (en) Method and system for performing secondary search actions based on primary search result attributes
US8495049B2 (en) System and method for extracting content for submission to a search engine
US7472113B1 (en) Query preprocessing and pipelining
EP1540514B1 (en) System and method for automated mapping of keywords and key phrases to documents
US20030225763A1 (en) Self-improving system and method for classifying pages on the world wide web
US20020087515A1 (en) Data acquisition system
TW200849045A (en) Web spam page classification using query-dependent data
US20050114306A1 (en) Integrated searching of multiple search sources
US20020174101A1 (en) Document retrieval system
JPH1125108A (en) Automatic extraction device for relative keyword, document retrieving device and document retrieving system using these devices
CN1494695A (en) Seamless translation system
KR20220034134A (en) Analysis of intellectual property data about products and services
US20090234794A1 (en) Method for a contextual, vector-based content-serving system
CN113221570A (en) Processing method, device, equipment and storage medium based on-line inquiry information
CN111241270A (en) Resume processing method and device
WO2009039311A2 (en) Method for a contextual, vector-based content-serving system
JP2015194955A (en) Bid information search system
CN113434760B (en) Construction method recommendation method, device, equipment and storage medium
US20230161831A1 (en) Systems and Methods for Automatic URL Identification From Data
JP5559725B2 (en) Information retrieval service providing method using web page divided into a plurality of information blocks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08831979

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08831979

Country of ref document: EP

Kind code of ref document: A2