EP1285368A2 - Automated target-market sampler - Google Patents

Automated target-market sampler

Info

Publication number
EP1285368A2
Authority
EP
European Patent Office
Prior art keywords
tester
test
testers
market research
focus group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01906864A
Other languages
German (de)
French (fr)
Inventor
Jim Patterson
Misha Birman
John Lorance
Stephen Ketchpel
Joseph Tan
Mike Elliot
Mark Risher
Karen Wong
Gareth Ivatt
Maya Venkatraman
Brian Hirschfeld
Raul Duran
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vividence Corp
Original Assignee
Vividence Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vividence Corp filed Critical Vividence Corp
Publication of EP1285368A2 publication Critical patent/EP1285368A2/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

Definitions

  • the present invention relates to target-market sampling, and more specifically to Internet hosting of automated tools that launch, collect, and report focus group, usability test, and market research studies of customer websites.
  • an automated target-market sampler embodiment of the invention comprises a website operated by a marketing-services provider.
  • Client-users and volunteer testers access the website over the Internet.
  • Such client-users have commercial websites of their own that they pay the marketing-services provider to study and report on how effective and easy they are to navigate and use.
  • the testers are rewarded for their participation in focus group, usability test, and market research type trials. Such rewards can include cash, gifts, electronic gift certificates, or other electronic cash equivalents, and either issue immediately or in batches.
  • the client-user specifies their target market and the testers provide their respective demographic and background information.
  • the marketing-services provider website registers such testers and accepts jobs from the client-users.
  • a test is launched by sending waves of invitations by e-mail to the testers according to matches between the target market description and the testers' demographic profiles.
  • a statistically accurate sample size is selected from the candidate testers who respond back to the invitation, and the selected testers then navigate the commercial website.
  • the marketing-services provider website tracks the click-paths taken by each and can ask the tester questions like what did they expect to see when a link was clicked.
  • a report is automatically generated for the client-user.
  • Fig. 1 is an automated target-market sampler embodiment of the present invention for implementation on the Internet; and Fig. 2 is a dataflow diagram of a method embodiment of the present invention.
  • Fig. 1 represents a system embodiment of the present invention, and is referred to herein by the general reference numeral 100.
  • the system 100 uses an Internet connection 102 to connect an automated target-market sampler 103 which includes a test webserver 104 and a test web-content 106.
  • the object is to enlist volunteer testers to provide test-flight information about a sponsoring client's commercial website.
  • the test web-content 106 enrolls focus group, usability test, and market research candidates, e.g. "testers", selects subgroups of testers for particular studies, designs a statistical sampling plan, collects responses, analyses the sample data, and reports the study analysis to a sponsor customer.
  • An enroll-testers module 108 allows volunteers who login over the Internet to qualify as testers. Various demographics are collected during enrollment so that the particular tester can be invited later to suitable studies.
  • An invitation-sender module 110 selects testers according to their demographics and invites them to join in a study. The invitations are e-mailed.
  • a sampling-planner module 112 takes in sponsoring client information and designs a test sample plan.
  • a Midas-reporter module 114 organizes the returned sample information and generates easy-to-interpret reports that the sponsoring client can use to improve their commercial website.
  • a tools module 116 provides a variety of useful administrative tools that are used to set demographic categories, statistical variables, etc.
  • a database 120 stores the demographic information collected from the testers, and can be used to host a mirror-site of a sponsoring client's website.
  • A statistically varied group of volunteer testers join in over the Internet 102. Each has a web-client and a browser, as represented by (tester-1) 122 and 124, (tester-2) 126 and 128, (tester-3) 130 and 132, and (tester-4) 134 and 136.
  • a sponsoring client webserver 138 includes a web-content-under-test 140 that is visited by invited ones of tester-1 through tester-4. Such testers are sent on what amounts to a scavenger hunt at the web-content-under-test 140. The items to hunt for are common things that typical customers of the website would want to buy and should be able to find.
  • Such tracking can be facilitated by downloading the sponsoring client's web-content-under-test 140 to the database 120. Then, the testers are spoon-fed webpages one-by-one. Each navigation choice made by the tester can be intercepted by the sampling-planner module 112, and the tester asked why that particular choice was made. The answers and the click-paths are stored in database 120 for later analysis.
  • Fig. 2 represents a method embodiment of the present invention, and is referred to herein by the general reference numeral 200.
  • the method 200 provides for automated testing according to design parameters and the demographics of invited testers.
  • the method 200 begins with a step 202 that presents a home page on the Internet through which testers can login.
  • a step 204 presents a tester home page for those who login successfully.
  • a step 206 displays the tests that are available to be taken. The tester selects one.
  • a step 208 lists the test instructions for the one selected by the tester. The test is taken, and typically lasts no longer than thirty minutes on-line.
  • a step 210 uploads the test results and stores them in a test database 212.
  • a step 214 validates that the tester has fulfilled their part of the bargain, so a step 216 can issue a reward.
  • Such rewards can include cash, gifts, electronic gift certificates, or other electronic cash equivalents for Internet shopping sites.
  • a reward may be issued immediately by e-mail, or stored for batch issuance in a reward queue 218.
  • If in step 206 it is determined the tester needs a browser, a browser-needed page is displayed in a step 220. Such browser is then downloaded in a step 222. A step 224 sends a test invitation by e-mail. If the tester forgets their password, then a step 226 presents a password assistance page and the forgotten password will be e-mailed.
  • embodiments of the present invention are automated tools, virtual factories, that can be used by business analysts, channel partners, and large customers to invite testers, monitor test and survey batteries, and validate test and survey results.
  • a characterization of a test or survey battery e.g. script, sample size, target audience, and invitation text
  • embodiments of the present invention automatically produce an exact number of validated test or survey results and generate an electronic or web-based report.
  • Embodiments of the present invention also guarantee that the reported results can be projected to the entire Internet population by sampling specific respondents according to defined demographic distributions.
  • Embodiments of the present invention launch multiple successive rounds of invitations, validate test and survey results in real-time, issue rewards, close tests, auto-populate the tester database with survey response data, and create a report, typically with no human supervision or intervention.
  • Embodiments of the present invention and their underlying work flow processes are preferably deeply instrumented, e.g., with "peek," "poke," and "trap" capabilities. Subscriber alerts provide early notification of major events and exceptions via e-mail or e-mailable pager. Summary statistics and live status are available on-demand through an externally accessible, rights-managed website. Deep and rich data collection enables ex-post analysis for ongoing improvement of internal forecasting models, heuristics, and operational metrics. Such forecasting models include invitation response rate by demographics, time of day, day of week, etc. The heuristics provide detection of fraud and sloth, and the operational metrics monitor cycle time, throughput, and marginal operating costs.
  • Embodiments of the present invention preferably accommodate thousands of clients, millions of testers, and thousands of tests per week. So automation is critical. Users are freed from the rote aspects of test processing, allowing them to focus exclusively on the most intellectually challenging stages of client engagements, e.g., formulating tests, interpreting results, and communicating outcomes. Such allows the talents and resources of channel partners and large clients to be leveraged, freeing users to focus on their larger and more strategic clients. Clients typically request well-defined tester samples with specific characteristics. The major demographic attributes of any tester sample are guaranteed to exactly match those of the broader Internet population. The results of a study can thus be projected to the entire Internet population. Customers can target a specific demographic subset of the Internet population, e.g. their target market.
  • www.women.com might also want to guarantee that 100% of their tester sample are female while guaranteeing that the age, income, and internet experience of the tester sample are representative of the Internet population.
  • Each user is required to provide basic test design information, e.g. test name, sample size, demographic constraints, etc. Users may specify discrete probability distributions or fixed constraints for each tester fact, e.g. demographic, psychographic, or behavioral attribute. By default, these are preferably set to distributions that match the broader Internet population. However, the default distribution for any fact may preferably be overridden, either by providing an alternative probability distribution to represent the target audience of the site, or by specifying arbitrary constraints.
  • the tester population from which samples are drawn may include non-overlapping tester populations quarantined for use only by specific clients.
  • Various test manager parameters are set automatically.
  • the default operating parameters can be changed by advanced users, e.g. the cycle time between successive waves of invitation e-mails, the ratio of invitations sent to results needed.
  • the typical defaults are four hours and two invitations.
  • If the target-market sampler 103 is four responses short of achieving its quota among female 18-34 year-old power users, and the invite multiple is two, then eight invitations are sent to testers who meet that demographic.
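The invitation arithmetic in the example above can be sketched as follows; the function name and parameters are illustrative and do not appear in the patent:

```python
def invitations_to_send(quota, validated, invite_multiple=2):
    """Invitations for one demographic segment: the shortfall against
    quota, multiplied by the invitation-to-result ratio (default 2)."""
    shortfall = max(quota - validated, 0)
    return shortfall * invite_multiple

# Four validated responses short of quota, invite multiple of two:
print(invitations_to_send(quota=10, validated=6))  # 8
```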
  • the default is 30%.
  • the mean response time is the average response latency among testers. Alternative embodiments allow different mean response time parameters for each demographic segment. The typical default is three hours.
  • Embodiments of the present invention generate a collection of validated test or survey results. Internal operations are instrumented to provide both preemptive and on-demand visibility into the state of any test battery. Successive waves of invitations targeted at appropriate tester sub-populations are adaptively launched to automatically produce an exact number of approved test results. These exactly match a user-specified demographic, psychographic, and behavioral distribution. After each test is closed the rewards are automatically generated. A Midas report is generated that includes path-analysis post processing, and is previewed and cached.
  • Testers typically initiate a test-taking session in response to an e-mail invitation. However, in some cases, they may simply login from the home page and notice that they are eligible for tests. In either case, they must have a special browser before obtaining test instructions or beginning the test. These special browsers automatically upload each test result.
  • the present invention supports ten major demographic attributes, e.g. age, gender, marital status, household income, education, primary occupation, work category, internet experience, hours online per week, and work hours online per week.
  • the present invention also supports arbitrary dimensions, and can include psychographic and behavioral attributes.
  • automated target-market sampler 103 first transforms N discrete probability distributions into an N-dimensional joint probability distribution.
  • the sample size is used as a multiple, rounding each cell to the nearest integer, in order to compute a sample quota.
  • One process embodiment of the present invention is completely generalizable, subject only to the operational limitations of the underlying database and system architecture. For example, the number of database queries required to launch a wave of invitations increases geometrically with the number of attributes. In general, let,
  • N = number of attributes
  • the sampling plan is an N-dimensional matrix of joint probabilities. Assuming the attribute distributions are independent, any cell in the sampling plan may be computed as the product of the corresponding marginal probabilities, P(i1,...,iN) = p1(i1) * p2(i2) * ... * pN(iN).
  • the quota is an N-dimensional matrix of integers. Any cell in the quota may be computed by scaling the corresponding sampling-plan cell by the sample size S and rounding to the nearest integer, Q(i1,...,iN) = round(S * P(i1,...,iN)).
  • automated target-market sampler 103 transforms N discrete probability distributions with n1,...,nN levels, respectively, into individual quotas for each of n1 × n2 × ... × nN segments of the tester population.
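One way to carry out this transformation, sketched in Python under the assumption that the marginal attribute distributions are independent (the names are illustrative):

```python
from itertools import product

def sample_quotas(marginals, sample_size):
    """Cross N discrete marginal distributions into an N-dimensional
    joint distribution, then scale each cell by the sample size and
    round to the nearest integer to obtain a per-segment quota."""
    quotas = {}
    for cells in product(*(m.items() for m in marginals)):
        levels = tuple(level for level, _ in cells)
        p = 1.0
        for _, prob in cells:
            p *= prob  # joint probability of this segment
        quotas[levels] = round(sample_size * p)
    return quotas

gender = {"female": 0.5, "male": 0.5}
age = {"18-34": 0.4, "35-49": 0.6}
print(sample_quotas([gender, age], 100))
# {('female', '18-34'): 20, ('female', '35-49'): 30,
#  ('male', '18-34'): 20, ('male', '35-49'): 30}
```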
  • the automated target-market sampler 103 continues launching waves of invitations targeted appropriately to each individual tester segment until it achieves its quota of validated testers in each segment, or exceeds the time limit, whichever comes first.
  • automated target-market sampler 103 preferably randomly re-samples within each segment to obtain the exact number of test results specified by the quota.
  • Users may apply additional arbitrary constraints to the tester sample, using an instance of the filter tool. Specifically, users may employ “equal to”, “at least”, “at most”, and “between” operators to constrain any subset of "age”, “gender”, “marital status”, “household income”, “education”, “primary occupation”, “work category”, “Internet experience”, “hours online per week”, and “work hours online per week”.
  • Testers who are "denied”, “auto-denied”, or “pending” are preferably excluded from any sample.
  • Testers who have taken a test any time within the past <X> days are preferably excluded from any sample.
  • By default, <X> = 30. This limit does not apply to surveys, and surveys do not count toward the limit for tests.
  • Testers who have taken ⁇ Y> or more tests in the past 12 months are preferably excluded from any sample.
  • Testers who have one or more valid outstanding invitations to other tests are preferably excluded from any sample.
  • Variables, ⁇ X> and ⁇ Y> are preferably editable only by engineers, although they can be changed at any time without re-releasing the product or jeopardizing the stability of the product.
  • the automated target-market sampler 103 periodically launches "waves" of test invitations, targeted appropriately, until the sampling quota has been met in each demographic, psychographic, and behavioral segment.
  • the following program can be used to decide when and how many e-mails to send,
  • Quota is a vector or matrix;
  • InviteMultiple is a scalar, independent of segment, time, day.
  • InviteMultiple[] becomes a matrix or other multivariate function, varying by demographic segment, time-of-day, day-of-week, and reward.
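The wave-launching program itself is not reproduced in the text; a minimal sketch, assuming per-segment quota and validated-result counts and an abstract `send_wave` e-mailer (all names illustrative), might look like:

```python
def launch_waves(quotas, validated, send_wave,
                 invite_multiple=2, max_waves=10):
    """Launch successive targeted waves until every segment's quota
    of validated results is met, or the wave budget is exhausted.
    In production a cycle time (default four hours) would elapse
    between waves; that delay is omitted here."""
    for _ in range(max_waves):
        shortfalls = {seg: quotas[seg] - validated.get(seg, 0)
                      for seg in quotas}
        open_segs = {s: n for s, n in shortfalls.items() if n > 0}
        if not open_segs:
            return True  # all quotas met; the test can be closed
        for seg, n in open_segs.items():
            send_wave(seg, n * invite_multiple)
    return False  # quota not met within the wave budget
```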
  • An "invitation-bounced" field in the tester database is typically incremented for each tester from whom e-mail messages have been returned undelivered.
  • test results uploaded by individual browsers contain ordered lists of URLs visited by their testers. Some web pages, particularly those that are dynamically-generated, use parameterized URLs containing additional information about the tester, about the user context, or about the host environment.
  • the automated target-market sampler 103 automatically post-processes the uploaded event records, logically bundling URLs that represent essentially identical pages from the perspective of the user and generating an aggregate path.
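A simplified version of this bundling step, assuming (as a simplification) that the query string and fragment carry only session parameters:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical(url):
    """Drop the query string and fragment so parameterized URLs that
    render essentially the same page compare equal."""
    s = urlsplit(url)
    return urlunsplit((s.scheme, s.netloc, s.path, "", ""))

def aggregate_path(urls):
    """Collapse an ordered click-path, merging runs of URLs that
    canonicalize to the same page."""
    path = []
    for url in urls:
        c = canonical(url)
        if not path or path[-1] != c:
            path.append(c)
    return path

print(aggregate_path([
    "http://shop.example.com/item?session=a1",
    "http://shop.example.com/item?session=a2",
    "http://shop.example.com/cart",
]))
# ['http://shop.example.com/item', 'http://shop.example.com/cart']
```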
  • One process embodiment of the present invention is identical to the process for test invitations, except that the uploaded, validated test results only count towards achieving the quota if the scored answers to the underlying survey questions make a tester "eligible.”
  • the automated target-market sampler 103 preferably launches surveys on a standalone basis with no automated linkage to previous or future test batteries.
  • the surveys are handled like tests, waves of invitations are automatically launched and survey results validated until a specific quota is reached.
  • Surveys are always necessarily associated with tests, e.g. in a pre-test survey to qualify testers, or a post-test survey to measure fulfillment experience.
  • a competitive intelligence function can be included to pit two sites against each other, e.g. client vs. competitor. Half of the tester sample visits site A then site B, to avoid biasing test results. The other half visits site B then site A.
  • the automated target-market sampler 103 includes support for automating the processing of competitive intelligence tests.
  • An authoring-tool-generated test script is used for input, and two tests are created and launched, each a mirror image of the other. The equivalent of two test results is produced.
  • automated target-market sampler 103 generates and launches N(N-1) tests and produces the equivalent of N(N-1)/2 tests. This assumes that testers cannot be asked to visit and thoughtfully evaluate more than two sites contiguously and that the competitive intelligence module requires pairwise comparison of all sites under consideration.
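Enumerating ordered pairs (each unordered pair run in both orders to cancel ordering bias) is a straightforward way to generate the N(N-1) tests; the sketch below is illustrative, not taken from the patent:

```python
from itertools import permutations

def competitive_tests(sites):
    """Every ordered pair of distinct sites: N(N-1) launched tests,
    covering the N(N-1)/2 pairwise comparisons in both orders."""
    return list(permutations(sites, 2))

tests = competitive_tests(["client", "rival-1", "rival-2"])
print(len(tests))  # 6 ordered tests for 3 pairwise comparisons
```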
  • the automated target-market sampler 103 generates electronic or web-based reports automatically upon completion of a test. If the test is a competitive intelligence test, automated target-market sampler 103 applies the competitive intelligence report template; otherwise, it uses the standard report template. After generating the report, Midas visits each page of a Midas report, causing it to be cached for future use.
  • a live "dashboard" graphical user interface provides on-demand and ex-post visibility into test conditions and progress, called views. Some views are preferably accessible only to the administrator, while others are preferably accessible to everyone, including channel partners and self-serve direct customers.
  • the list of views includes:
  • Status Bar - Two-dimensional graph of cumulative arrivals vs. time, color-coded to indicate # invitations, # tests underway, # tests completed, and # tests validated;
  • Invitation Waves - Two-dimensional graph of arrivals vs. time, where arrivals are color-coded to indicate the invitation wave from which they originated;
  • Site Events - One-dimensional graph of site events over time, where events are color-coded to indicate outage, brownout, major change, or other type of event;
  • Failure Rate - Two-dimensional graph of the "give-up" rate over time;
  • Approval Rate - Two-dimensional graph of test result validation rate over time;
  • Net Yield - Two-dimensional graph of net invitation yield, expressed as a percentage, over time; and
  • Yield - Scrollable, read-only text box enumerating the invitation yield, expressed as a percentage, for all combinations of constrained demographics.
  • a site can be monitored continuously to guarantee that it is not down or overlooked.
  • Site inaccessibility, site changes, http errors, connection timeouts, and file size changes can all be detected.
  • Users can register to be notified preemptively via e-mail or numeric pager of meaningful events. These include both positive events and negative events, e.g. test launched, test complete, test delayed, target site down, etc. Notifications also cover periodically polled status updates and trapped exceptions, e.g. hourly progress update, daily site info, test complete, site down.
  • Notifications are preferably made available for test launched, test delayed, test closed, target site down or inaccessible, target site slow, target site changed, site down or inaccessible, site slow, rewards inventory low, rewards inventory empty, high "give-up” rate, and periodic progress summary (status sent every n hours).
  • During registration, candidate testers are automatically activated or denied immediately. The testers arrive at a thank-you page and receive a "welcome e-mail" regardless of whether their account is activated or denied. If either the first or last name is blank, the tester is labeled as "auto-denied". If the last name consists of only a single letter, the tester is labeled as "auto-denied".
  • the tester is labeled as "auto-denied".
  • <X> = 3.
  • <Y> = 6.
  • If the Internet domain of the e-mail address of the tester candidate is identical to an entry in the Internet domain blacklist, e.g. bob@customerinsites.com, the tester is labeled as "auto-denied".
  • the tester is labeled as "auto-denied." If the domain of the e-mail address is invalid or inaccessible, as indicated by a DNS lookup, the tester is labeled as "auto-denied." If the e-mail address of the tester candidate is identical to the e-mail address of another tester already in the database, mark the duplicate entry as "auto-denied." Ideally, upon registration, the system should instantly balk, notify the tester interactively, and offer them an opportunity to be reminded of their original username/password via e-mail.
  • If the first name, last name, and password match another record already in the database, treat the new registration as a duplicate (even if the e-mail addresses are different). If the first name, last name, and IP address match another record already in the database, treat the new registration as a duplicate (even if the e-mail addresses are different). If the IP-address of the tester candidate is assigned to a domain in the Internet domain blacklist, the tester is labeled as "auto-denied." Otherwise, the tester candidate is labeled as "auto-active." Where <X> and <Y> are preferably editable only by engineers, although they can be changed at any time without re-releasing the product or jeopardizing the stability of the product.
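The registration-screening rules above reduce to a decision function; the field names, and the flattening of the duplicate-detection rules into a single e-mail check, are illustrative simplifications:

```python
def screen_registration(candidate, domain_blacklist, existing_emails):
    """Label a tester candidate "auto-active" or "auto-denied" per
    the blank-name, single-letter, blacklist, and duplicate rules."""
    first = candidate.get("first_name", "").strip()
    last = candidate.get("last_name", "").strip()
    email = candidate.get("email", "").lower()
    domain = email.rsplit("@", 1)[-1] if "@" in email else ""
    if not first or not last:
        return "auto-denied"      # blank first or last name
    if len(last) == 1:
        return "auto-denied"      # single-letter last name
    if domain in domain_blacklist:
        return "auto-denied"      # blacklisted e-mail domain
    if email in existing_emails:
        return "auto-denied"      # duplicate registration
    return "auto-active"
```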
  • Testers are rewarded either in near-real-time as their results arrive or twice-daily in batch mode, e.g. site-specific gift certificates or branded gift certificates conferred by a certificate partner. In any case, rewards are preferably issued within thirty-six hours of test completion.
  • <T> is editable only by engineers, although it can be changed at any time without re-releasing the product or jeopardizing the stability of the product.
  • Each uploaded test result contains additional data about the tester.
  • the test is marked "auto-invalid." If the tester abandoned the entire test using the "stop test" menu on the browser, the test is marked "auto-invalid." If the test result is incomplete (e.g. truncated) or corrupted, as indicated by a checksum or CRC, the test is marked "auto-invalid". Otherwise, the test is marked "auto-valid." Each invalid test result increments a number-invalid-tests field in the tester database. If Num_Invalid_Tests exceeds <Z>, the tester is marked "auto-denied" and disallowed from participating in future tests.
  • ⁇ T>, ⁇ A>, ⁇ C>, ⁇ B>, ⁇ M>, ⁇ N>, and ⁇ Z> are preferably editable only b y engineers, and can be changed at any time without re-releasing the product or jeopardizing the stability of the product.
  • Candidate testers are preferably screened using these lists whether acquired via a registration on our home page or purchased directly from another source.
  • Each blacklist entry includes a field describing the intent of the entry and a field identifying the creator of the entry. Entries are preferably not case sensitive.
  • An Internet domain blacklist lists domains from whom the administrator, as a matter of policy, chooses not to allow testers. For example, employees or competitors of the provider are not allowed to become active testers.
  • a user may constrain the sample along any tester attribute, e.g., a specified probability distribution can be imposed on an attribute. This is useful for guaranteeing that a sample is roughly representative of an online population.
  • a user may also impose a hard constraint, e.g. "at least", "at most", "between.”
  • Each attribute has implicit constituents. For example, "age” can be divided into five ranges: Under 18, 18-34, 35-49, 50-64, and 65+. Users choose the fraction of the sample represented by each discrete segment. The total must sum to 100%.
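The sum-to-100% rule for an attribute's segment fractions can be enforced with a simple validator (names illustrative):

```python
def set_attribute_distribution(fractions):
    """Accept per-segment fractions for one attribute only if they
    sum to 100% within floating-point tolerance."""
    total = sum(fractions.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"segment fractions sum to {total:.2%}, not 100%")
    return fractions

age = set_attribute_distribution(
    {"Under 18": 0.10, "18-34": 0.30, "35-49": 0.30,
     "50-64": 0.20, "65+": 0.10})
```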

Abstract

An automated target-market sampler comprises a website operated by a marketing-services provider. Client-users and volunteer testers access the website over the Internet. Such client-users have commercial websites of their own that they will pay the marketing-services provider to study and report on how effective and easy they are to navigate and use. The testers are rewarded for their participation in focus-group type trials. Such rewards can include cash, gifts, electronic gift certificates, or other electronic cash equivalents, and either issue immediately or in batches. The client-user specifies their target market and the testers provide their respective demographic and background information. The marketing-services provider website registers such testers and accepts jobs from the client-users. A test is launched by sending waves of invitations by e-mail to the testers according to matches between the target market description and the testers' demographic profiles. A statistically accurate sample size is selected from the candidate testers who respond back to the invitation, and the selected testers then navigate the commercial website. The marketing-services provider website tracks the click-paths taken by each and can ask the tester questions like what did they expect to see when a link was clicked. At the conclusion of the test, a report is automatically generated for the client-user.

Description

AUTOMATED TARGET-MARKET SAMPLER
BACKGROUND OF THE INVENTION
TECHNICAL FIELD
The present invention relates to target-market sampling, and more specifically to Internet hosting of automated tools that launch, collect, and report focus group, usability test, and market research studies of customer websites.
DESCRIPTION OF THE PRIOR ART
The wild frontiers of the Internet are yielding to deliberate and more analytical methods of marketing, and customer-ease-of-use is receiving ever greater attention. Some of the first commercial websites were difficult to use and their operators had no clue that their lack of Internet sales was due mainly to poor website design. Others, more successful, put up Internet websites that were easy-to-use, and were wildly successful, e.g. Amazon and eBay.
In traditional marketing, new products are tested with focus groups, the members of which are selected according to their respective demographic make-up. Statistical inferences about a general target population are then drawn from how the focus group reacts to a new product. Product flaws can be discovered early when the fixes are inexpensive, or total failures can be pulled before the failure assumes titanic proportions.
Focus-group product-analysis studies have always been expensive, slow, and not scalable to large numbers of tests. Such studies have therefore been traditionally reserved for mass-produced products like cars, toothpaste, new food items, and telephones. Now that a new way of doing business has arrived with the Internet, an automated method of testing websites and the products they offer is needed.
SUMMARY OF THE INVENTION
Briefly, an automated target-market sampler embodiment of the invention comprises a website operated by a marketing-services provider. Client-users and volunteer testers access the website over the Internet. Such client-users have commercial websites of their own that they pay the marketing-services provider to study and report on how effective and easy they are to navigate and use. The testers are rewarded for their participation in focus group, usability test, and market research type trials. Such rewards can include cash, gifts, electronic gift certificates, or other electronic cash equivalents, and either issue immediately or in batches. The client-user specifies their target market and the testers provide their respective demographic and background information. The marketing-services provider website registers such testers and accepts jobs from the client-users. A test is launched by sending waves of invitations by e-mail to the testers according to matches between the target market description and the testers' demographic profiles. A statistically accurate sample size is selected from the candidate testers who respond back to the invitation, and the selected testers then navigate the commercial website. The marketing-services provider website tracks the click-paths taken by each and can ask the tester questions like what did they expect to see when a link was clicked. At the conclusion of the test, a report is automatically generated for the client-user.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is an automated target-market sampler embodiment of the present invention for implementation on the Internet; and Fig. 2 is a dataflow diagram of a method embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Fig. 1 represents a system embodiment of the present invention, and is referred to herein by the general reference numeral 100. The system 100 uses an Internet connection 102 to connect an automated target-market sampler 103 which includes a test webserver 104 and a test web-content 106. Such, for example, includes Microsoft WINDOWS-NT with IIS and ASP. The object is to enlist volunteer testers to provide test-flight information about a sponsoring client's commercial website.
The test web-content 106 enrolls focus-group, usability-test, and market-research candidates, e.g. "testers", selects subgroups of testers for particular studies, designs a statistical sampling plan, collects responses, analyses the sample data, and reports the study analysis to a sponsor customer. An enroll-testers module 108 allows volunteers who login over the Internet to qualify as testers. Various demographics are collected during enrollment so that the particular tester can be invited later to suitable studies. An invitation-sender module 110 selects testers according to their demographics and invites them to join in a study. The invitations are e-mailed. A sampling-planner module 112 takes in sponsoring-client information and designs a test sample plan. A Midas-reporter module 114 organizes the returned sample information and generates easy-to-interpret reports that the sponsoring client can use to improve their commercial website. A tools module 116 provides a variety of useful administrative tools that are used to set demographic categories, statistical variables, etc. A database 120 stores the demographic information collected from the testers, and can be used to host a mirror-site of a sponsoring client's website.
A statistically varied group of volunteer testers join in over the Internet 102. Each has a web-client and a browser, as represented by (tester-1) 122 and 124, (tester-2) 126 and 128, (tester-3) 130 and 132, and (tester-4) 134 and 136. In actual practice, thousands of testers could be enrolled and used in dozens of simultaneous studies. Incentives and other "rewards" are used to persuade the testers to keep participating.
A sponsoring client webserver 138 includes a web-content-under-test 140 that is visited by invited ones of tester-1 through tester-4. Such testers are sent on what amounts to a scavenger hunt at the web-content-under-test 140. The items to hunt for are common things that typical customers of the website would want to buy and should be able to find.
How the testers navigate the sponsoring client's website is tracked and recorded. The "click-paths" used can reveal construction problems that need to be fixed. Failures by the testers to find particular for-sale items that are listed will also indicate that the website has problems that need correction.
Such tracking can be facilitated by downloading the sponsoring client's web-content-under-test 140 to the database 120. Then, the testers are spoon-fed webpages one-by-one. Each navigation choice made by the tester can be intercepted by the sampling-planner module 112, and the tester asked why that particular choice was made. The answers and the click-paths are stored in database 120 for later analysis.
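The interception just described can be sketched as a small tracker that records each navigation step together with the tester's stated reason. This is an illustrative sketch only; the class and field names are assumptions, not part of the patent.

```python
# Illustrative sketch of click-path tracking with interleaved questions;
# names are assumptions, not from the patent.
class ClickPathTracker:
    def __init__(self, tester_id):
        self.tester_id = tester_id
        self.steps = []                      # ordered navigation records

    def record_click(self, url, reason=None):
        """Intercept one navigation choice; 'reason' holds the tester's
        answer to "why did you click that?", when one was collected."""
        self.steps.append({"url": url, "reason": reason})

    def click_path(self):
        """Return the ordered list of URLs visited, for later analysis."""
        return [step["url"] for step in self.steps]

tracker = ClickPathTracker(tester_id=122)
tracker.record_click("/home")
tracker.record_click("/products", reason="expected a list of items for sale")
tracker.record_click("/products/toothpaste")
```

The stored records would then be uploaded with the test result for path analysis.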
Fig. 2 represents a method embodiment of the present invention, and is referred to herein by the general reference numeral 200. The method 200 provides for automated testing according to design parameters and the demographics of invited testers. The method 200 begins with a step 202 that presents a home page on the Internet through which testers can login. A step 204 presents a tester home page for those who login successfully. A step 206 displays the tests that are available to be taken. The tester selects one. A step 208 lists the test instructions for the one selected by the tester. The test is taken, and typically lasts no longer than thirty minutes on-line. A step 210 uploads the test results and stores them in a test database 212. A step 214 validates that the tester has fulfilled their part of the bargain, so a step 216 can issue a reward. Such rewards can include cash, gifts, electronic gift certificates, or other electronic cash equivalents for Internet shopping sites. A reward may be issued immediately by e-mail, or stored for batch issuance in a reward queue 218.
If in step 206 it is determined the tester needs a browser, a browser-needed page is displayed in a step 220. Such browser is then downloaded in a step 222. A step 224 sends a test invitation by e-mail. If the tester forgets their password, then a step 226 presents a password assistance page and the forgotten password will be e-mailed.
In general, embodiments of the present invention are automated tools, virtual factories, that can be used by business analysts, channel partners, and large customers to invite testers, monitor test and survey batteries, and validate test and survey results. Given a characterization of a test or survey battery (e.g. script, sample size, target audience, and invitation text), embodiments of the present invention automatically produce an exact number of validated test or survey results and generate an electronic or web-based report. Embodiments of the present invention also guarantee that the reported results can be projected to the entire Internet population by sampling specific respondents according to defined demographic distributions.
Embodiments of the present invention launch multiple successive rounds of invitations, validate test and survey results in real-time, issue rewards, close tests, auto-populate the tester database with survey response data, and create a report, typically with no human supervision or intervention.
Embodiments of the present invention and their underlying work-flow processes are preferably deeply instrumented, e.g., with "peek," "poke," and "trap" capabilities. Subscriber alerts provide early notification of major events and exceptions via e-mail or e-mailable pager. Summary statistics and live status are available on-demand through an externally accessible, rights-managed website. Deep and rich data collection enables ex-post analysis for ongoing improvement of internal forecasting models, heuristics, and operational metrics. Such forecasting models include invitation response rate by demographics, time of day, day of week, etc. The heuristics provide detection of fraud and sloth, and the operational metrics monitor cycle time, throughput, and marginal operating costs.
In prior art business models, business analysts, tester support staff, and their engineering teams spend several hours per test characterizing each target demographic, launching invitations, monitoring tests, issuing rewards, and validating test results. Such processes are not scaleable.
Embodiments of the present invention preferably accommodate thousands of clients, millions of testers, and thousands of tests per week. So automation is critical. Users are freed from the rote aspects of test processing, allowing them to focus exclusively on the most intellectually challenging stages of client engagements, e.g., formulating tests, interpreting results, and communicating outcomes. Such allows the talents and resources of channel partners and large clients to be leveraged, freeing users to focus on their larger and more strategic clients. Clients typically request well-defined tester samples with specific characteristics. The major demographic attributes of any tester sample are guaranteed to exactly match those of the broader Internet population. The results of a study can thus be projected to the entire Internet population. Customers can target a specific demographic subset of the Internet population, e.g. their target market.
For example, www.women.com might also want to guarantee that 100% of their tester sample are female while guaranteeing that the age, income, and internet experience of the tester sample are representative of the Internet population. Each user is required to provide basic test design information, e.g. test name, sample size, demographic constraints, etc. Users may specify discrete probability distributions or fixed constraints for each tester fact, e.g. demographic, psychographic, or behavioral attribute. By default, these are preferably set to distributions that match the broader Internet population. However, the default distribution for any fact may preferably be overridden, either by providing an alternative probability distribution to represent the target audience of the site, or by specifying arbitrary constraints.
The tester population, from which samples are drawn, may include non-overlapping tester populations quarantined for use only by specific clients. Various test manager parameters are set automatically. The default operating parameters can be changed by advanced users, e.g. the cycle time between successive waves of invitation e-mails and the ratio of invitations sent to results needed. The typical defaults are four hours and two invitations.
For example, if the target-market sampler 103 is four responses short of achieving its quota among female 18-34 year-old power users, and the invite multiple is two, eight invitations are sent to testers who match that demographic.
The expected fraction of testers who would eventually respond if the test were never closed is the "invitation yield", and is approximated using historical data.
The default is 30%. The mean response time is the average response latency among testers. Alternative embodiments allow different mean response time parameters for each demographic segment. The typical default is three hours.
Embodiments of the present invention generate a collection of validated test or survey results. Internal operations are instrumented to provide both preemptive and on-demand visibility into the state of any test battery. Successive waves of invitations targeted at appropriate tester sub-populations are adaptively launched to automatically produce an exact number of approved test results. These exactly match a user-specified demographic, psychographic, and behavioral distribution. After each test is closed the rewards are automatically generated. A Midas report is generated that includes path-analysis post processing, and is previewed and cached.
Testers typically initiate a test-taking session in response to an e-mail invitation. However, in some cases, they may simply login from the home page and notice that they are eligible for tests. In either case, they must have a special browser before obtaining test instructions or beginning the test. These special browsers automatically upload each test result.
Two distinct types of targeting can be used. Each user specifies a required sample frequency distribution along each major demographic attribute ("fact"). The present invention supports ten major demographic attributes, e.g. age, gender, marital status, household income, education, primary occupation, work category, Internet experience, hours online per week, and work hours online per week. The present invention also supports arbitrary dimensions, and can include psychographic and behavioral attributes.
Guaranteed distributions are available in the test results. Users specify a sample size, e.g. n=200, and the exact frequency distributions for each of several demographic, psychographic, and behavioral attributes. By default, such distributions describe the online population (except that "age" excludes individuals under 18 years old by default). Such may be changed along one or more dimensions to generate a sample representative of the target audience of a client site. Samples contain precisely "n" validated test results with the specified distribution along each dimension.
In one embodiment, the automated target-market sampler 103 first transforms N discrete probability distributions into an N-dimensional joint probability distribution. The sample size is used as a multiplier, rounding each cell to the nearest integer, in order to compute a sample quota.
One process embodiment of the present invention is completely generalizable, subject only to the operational limitations of the underlying database and system architecture. For example, the number of database queries required to launch a wave of invitations increases geometrically with the number of attributes. In general, let,
N = number of attributes;
L_j = level of attribute j;
n_j = number of possible levels of attribute j;
p_i,j = probability associated with level i of attribute j;
s = sample size.

For any attribute, the sum of the probabilities of all of the levels must equal one,

    sum(i = 1 to n_j) p_i,j = 1, for all 1 <= j <= N.    (1)

The sampling plan is an N-dimensional matrix of joint probabilities. Any cell in the sampling plan may be computed as,

    J_L1,L2,...,LN = p_L1,1 * p_L2,2 * ... * p_LN,N = prod(j = 1 to N) p_Lj,j.    (2)

Similarly, the quota is an N-dimensional matrix of integers. Any cell in the quota may be computed as,

    Q_L1,L2,...,LN = s * J_L1,L2,...,LN = s * prod(j = 1 to N) p_Lj,j.    (3)
In general, the automated target-market sampler 103 transforms N discrete probability distributions with n_1, ..., n_N levels, respectively, into individual quotas for each of n_1 * n_2 * ... * n_N segments of the tester population. The automated target-market sampler 103 continues launching waves of invitations targeted appropriately to each individual tester segment until it achieves its quota of validated testers in each segment, or exceeds the time limit, whichever comes first.
When the test finishes, some individual segments may contain more than the required number of test results. In this case, automated target-market sampler 103 preferably randomly re-samples within each segment to obtain the exact number of test results specified by the quota.
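The quota computation of equations (1)-(3) can be sketched as follows, assuming independent marginal distributions. Function and variable names are illustrative only.

```python
from itertools import product

def sampling_quota(marginals, sample_size):
    """Compute per-segment quotas from N marginal distributions:
    each marginal must sum to one (equation 1), each joint cell is the
    product of its marginal probabilities (equation 2), and the quota is
    the sample size times the joint probability, rounded to the nearest
    integer (equation 3)."""
    for dist in marginals:
        assert abs(sum(dist) - 1.0) < 1e-9   # equation (1)
    quota = {}
    for levels in product(*(range(len(d)) for d in marginals)):
        joint = 1.0
        for j, level in enumerate(levels):
            joint *= marginals[j][level]     # equation (2)
        quota[levels] = round(sample_size * joint)   # equation (3)
    return quota

# Two attributes: gender (50/50) and age band (40/40/20), with s = 200.
q = sampling_quota([[0.5, 0.5], [0.4, 0.4, 0.2]], 200)
```

With weakly correlated attributes, as the text assumes, these quotas approximate a representative sample; rounding can make the cells sum to slightly more or less than s, which the re-sampling step above would correct.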
There are an infinite number of joint probability distributions that can be generated from N discrete probability distributions, depending on the degree of correlation among the attributes. One process embodiment of the present invention assumes that the attributes are not highly correlated. If this assumption is true, the sample is highly representative of the population. If not, the sample may be slightly biased.
Users may apply additional arbitrary constraints to the tester sample, using an instance of the filter tool. Specifically, users may employ "equal to", "at least", "at most", and "between" operators to constrain any subset of "age", "gender", "marital status", "household income", "education", "primary occupation", "work category", "Internet experience", "hours online per week", and "work hours online per week".
By default, several constraints are preferably automatically applied to samples. Testers who are "denied", "auto-denied", or "pending" are preferably excluded from any sample. Testers who have taken a test any time within the past <X> days are preferably excluded from any sample; by default, <X> = 30. Testers who have taken <Y> or more tests in the past 12 months are preferably excluded from any sample; by default, <Y> = 6. These limits apply to tests only; they do not apply to surveys, and surveys do not count toward the limits. Testers who have one or more valid outstanding invitations to other tests are preferably excluded from any sample. The variables <X> and <Y> are preferably editable only by engineers, although they can be changed at any time without re-releasing the product or jeopardizing the stability of the product.
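These default screening constraints can be sketched as a simple eligibility predicate over a hypothetical tester record, using the stated defaults <X> = 30 and <Y> = 6. The field names are assumptions.

```python
from datetime import date

# Default screening thresholds from the text: X = 30 days, Y = 6 tests.
X_DAYS, Y_TESTS = 30, 6

def eligible(tester, today):
    """Sketch of the implicit sample constraints over a hypothetical
    tester record; the field names are assumptions."""
    if tester["status"] in ("denied", "auto-denied", "pending"):
        return False
    last = tester.get("last_test_date")
    if last is not None and (today - last).days < X_DAYS:
        return False                        # tested within the past X days
    if tester["tests_past_12_months"] >= Y_TESTS:
        return False                        # Y or more tests in 12 months
    if tester["open_invitations"] > 0:
        return False                        # outstanding invitation
    return True

today = date(2001, 2, 1)
ok = {"status": "auto-active", "last_test_date": date(2000, 12, 1),
      "tests_past_12_months": 2, "open_invitations": 0}
recent = dict(ok, last_test_date=date(2001, 1, 15))   # only 17 days ago
```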
The automated target-market sampler 103 periodically launches "waves" of test invitations, targeted appropriately, until the sampling quota has been met in each demographic, psychographic, and behavioral segment. The following program can be used to decide when and how many e-mails to send,
While (Quota[ ] not met) Do
    For each Segment in Quota[ ]
        Let Demand = Max(0, Quota[Segment] - NumValidatedTesters[Segment])
        Launch (Demand * InviteMultiple[ ]) e-mails targeted at Segment
    Next Segment
    Wait until CycleTime elapses
End While

Both implicit and user-defined constraints must be applied when querying to obtain testers within a specific demographic segment.
Quota, a vector or matrix, is computed using a sampling plan creation process. In the present invention, InviteMultiple is a scalar, independent of segment, time, and day. As extensive empirical data on response rates is acquired, InviteMultiple[ ] becomes a matrix or other multivariate function, varying by demographic segment, time-of-day, day-of-week, and reward. An "invitation-bounced" field in the tester database is typically incremented for each tester from whom e-mail messages have been returned undelivered.
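A single pass of the wave loop above can be sketched in Python; it computes each segment's shortfall and multiplies it by the invite multiple, matching the earlier example of four missing results with a multiple of two. The segment keys are illustrative.

```python
def plan_wave(quota, validated, invite_multiple=2):
    """One pass of the wave loop: for each segment still short of quota,
    Demand = Max(0, Quota - NumValidatedTesters), and
    (Demand * InviteMultiple) invitations are launched."""
    wave = {}
    for segment, needed in quota.items():
        demand = max(0, needed - validated.get(segment, 0))
        if demand:
            wave[segment] = demand * invite_multiple
    return wave

# Four validated results short among female 18-34 power users, with the
# default invite multiple of two, yields eight invitations.
quota = {"F-18-34-power": 40, "M-35-49": 30}
validated = {"F-18-34-power": 36, "M-35-49": 30}
wave = plan_wave(quota, validated)
```

Segments that have met their quota receive no further invitations; the outer loop would repeat this after each CycleTime until every quota is met or the time limit passes.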
In a path analysis post-processing process, test results uploaded by individual browsers contain ordered lists of URLs visited by their testers. Some web pages, particularly those that are dynamically-generated, use parameterized URLs containing additional information about the tester, about the user context, or about the host environment. The automated target-market sampler 103 automatically post-processes the uploaded event records, logically bundling URLs that represent essentially identical pages from the perspective of the user and generating an aggregate path. One process embodiment of the present invention is identical to the process for test invitations, except that the uploaded, validated test results only count towards achieving the quota if the scored answers to the underlying survey questions make a tester "eligible."
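One plausible interpretation of this bundling, not the patent's exact algorithm, is to canonicalize each URL by dropping session parameters and fragments and then collapse consecutive visits to the same logical page. The URLs below are hypothetical.

```python
from urllib.parse import urlsplit

def canonical(url):
    """Drop the query string and fragment so parameterized variants of
    the same page compare equal (one plausible bundling rule)."""
    parts = urlsplit(url)
    return parts.scheme + "://" + parts.netloc + parts.path

def aggregate_path(urls):
    """Collapse an uploaded ordered URL list into an aggregate path,
    merging consecutive visits that bundle to the same logical page."""
    path = []
    for url in urls:
        page = canonical(url)
        if not path or path[-1] != page:
            path.append(page)
    return path

raw = ["http://site/catalog?sess=a1", "http://site/catalog?sess=a2",
       "http://site/item/7?sess=a2"]
```

In practice, which query parameters are significant would depend on the target site, so a production rule would likely be configurable per site.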
The automated target-market sampler 103 preferably launches surveys on a standalone basis with no automated linkage to previous or future test batteries. The surveys are handled like tests: waves of invitations are automatically launched and survey results validated until a specific quota is reached. Surveys can also be associated with tests, e.g. in a pre-test survey to qualify testers, or a post-test survey to measure fulfillment experience. A competitive intelligence function can be included to pit two sites against each other, e.g. client vs. competitor. To avoid biasing test results, half of the tester sample visits site A then site B; the other half visits site B then site A. The automated target-market sampler 103 includes support for automating the processing of competitive intelligence tests. An authoring-tool-generated test script is used for input, and two tests are created and launched, each a mirror image of the other. The equivalent of two test results is produced. In general, with N competitors, the automated target-market sampler 103 generates and launches N(N-1) tests and produces the equivalent of N(N-1)/2 tests. This assumes that testers cannot be asked to visit and thoughtfully evaluate more than two sites contiguously and that the competitive intelligence module requires pairwise comparison of all sites under consideration.
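The mirrored competitive-intelligence tests can be sketched as ordered pairs of sites: with N sites, both orderings of every pair are launched, giving N(N-1) tests, one unordered pair per comparison. The site names below are hypothetical.

```python
from itertools import permutations

def competitive_tests(sites):
    """Generate both orderings of every pair of sites: N(N-1) mirrored
    tests in all, one pairwise comparison per unordered pair."""
    return list(permutations(sites, 2))

tests = competitive_tests(["client.com", "rival-a.com", "rival-b.com"])
```

Each ordered pair becomes one launched test; the A-then-B and B-then-A tests for the same pair are the mirror images described above.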
The automated target-market sampler 103 generates electronic or web-based reports automatically upon completion of a test. If the test is a competitive intelligence test, automated target-market sampler 103 applies the competitive intelligence report template; otherwise, it uses the standard report template. After generating the report, Midas visits each page of a Midas report, causing it to be cached for future use.
A live "dashboard" graphical user interface provides on-demand and ex-post visibility into test conditions and progress through a set of views. Some views are preferably accessible only to the administrator, while others are preferably accessible to everyone, including channel partners and self-serve direct customers. The list of views includes:
1. Status Bar - Percentage: One-dimensional graph of test progress, shown as % complete;
2. Status Bar - Absolute: Two-dimensional graph of cumulative arrivals vs. time, color-coded to indicate # invitations, # tests underway, # tests completed, and # tests validated;
3. Invitation Waves: Two-dimensional graph of arrivals vs. time, where arrivals are color-coded to indicate the invitation wave from which they originated;
4. Site Events: One-dimensional graph of site events over time, where events are color-coded to indicate outage, brownout, major change, or other type of event;
5. Failure Rate: Two-dimensional graph of the "give-up" rate over time;
6. Approval Rate: Two-dimensional graph of test result validation rate over time;
7. Emerging Demographics: Bar graph indicating an emerging one-dimensional discrete probability distribution for a specific demographic. If the user specified a required probability distribution for this demographic, the graph is superimposed upon another (grayed out) bar graph indicating the requirement. This graph is available for all major tester facts;
8. Log: Scrollable, read-only text box listing all major "events", each time-stamped;
9. Net Yield: Two-dimensional graph of net invitation yield, expressed as a percentage, over time; and
10. Yield: Scrollable, read-only text box enumerating the invitation yield, expressed as a percentage, for all combinations of constrained demographics.
A site can be monitored continuously to guarantee that it is not down or overlooked. Site inaccessibility, site changes, http errors, connection timeouts, and file-size changes can all be detected. Users can register to be notified preemptively via e-mail or numeric pager of meaningful events. These include both positive events and negative events, e.g. test launched, test complete, test delayed, target site down, etc., as well as periodically polled status updates and trapped exceptions, e.g. hourly progress update, daily site info, test complete, site down. Notifications are preferably made available for test launched, test delayed, test closed, target site down or inaccessible, target site slow, target site changed, site down or inaccessible, site slow, rewards inventory low, rewards inventory empty, high "give-up" rate, and periodic progress summary (status sent every n hours).
During registration, candidate testers are automatically activated or denied immediately. The testers arrive at a thank-you page and receive a "welcome e-mail" regardless of whether their account is activated or denied. If either the first or last name is blank, the tester is labeled "auto-denied". If the last name consists of only a single letter, the tester is labeled "auto-denied". If either the first or last name contains a series of <X> or more repeated letters (e.g. "aaaaa"), the tester is labeled "auto-denied"; by default, <X> = 3. If either the first or last name contains a series of <Y> or more consonants in a row (e.g. "sdflkjds"), the tester is labeled "auto-denied"; by default, <Y> = 6. If the Internet domain of the e-mail address of the tester candidate is identical to an entry in the Internet domain blacklist, e.g. bob@customerinsites.com, the tester is labeled "auto-denied". If the first and last name of the tester candidate match an entry in the named-individuals blacklist, e.g. Mickey Mouse, the tester is labeled "auto-denied."
If the domain of the e-mail address is invalid or inaccessible, as indicated by a DNS lookup, the tester is labeled "auto-denied." If the e-mail address of the tester candidate is identical to the e-mail address of another tester already in the database, the duplicate entry is marked "auto-denied." Ideally, upon registration, the system should instantly balk, notify the tester interactively, and offer them an opportunity to be reminded of their original username/password via e-mail. If the first name, last name, and password match another record already in the database, the new registration is treated as a duplicate (even if the e-mail addresses are different). If the first name, last name, and IP address match another record already in the database, the new registration is treated as a duplicate (even if the e-mail addresses are different). If the IP address of the tester candidate is assigned to a domain in the Internet domain blacklist, the tester is labeled "auto-denied." Otherwise, the tester candidate is labeled "auto-active." The variables <X> and <Y> are preferably editable only by engineers, although they can be changed at any time without re-releasing the product or jeopardizing the stability of the product.
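The name-based auto-deny heuristics can be sketched as follows, using the stated defaults <X> = 3 and <Y> = 6. The blacklist entries are illustrative, the DNS and duplicate-record checks are omitted, and the consonant character class is an assumption (here 'y' is treated as a vowel).

```python
import re

# Illustrative blacklists; <X> = 3 repeated letters, <Y> = 6 consonants.
DOMAIN_BLACKLIST = {"customerinsites.com"}
NAME_BLACKLIST = {("mickey", "mouse")}
X_REPEATS, Y_CONSONANTS = 3, 6

def screen(first, last, email):
    """Return "auto-active" or "auto-denied" per the name and blacklist
    heuristics (DNS and duplicate checks omitted from this sketch)."""
    if not first or not last:
        return "auto-denied"                # blank name
    if len(last) == 1:
        return "auto-denied"                # single-letter last name
    for name in (first.lower(), last.lower()):
        if re.search(r"(.)\1{%d,}" % (X_REPEATS - 1), name):
            return "auto-denied"            # X+ repeated letters, e.g. "aaaaa"
        if re.search("[bcdfghjklmnpqrstvwxz]{%d,}" % Y_CONSONANTS, name):
            return "auto-denied"            # Y+ consonants, e.g. "sdflkjds"
    if email.split("@")[-1].lower() in DOMAIN_BLACKLIST:
        return "auto-denied"                # blacklisted e-mail domain
    if (first.lower(), last.lower()) in NAME_BLACKLIST:
        return "auto-denied"                # blacklisted named individual
    return "auto-active"
```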
Testers are rewarded either in near-real-time as their results arrive or twice-daily in batch mode, e.g. site-specific gift certificates or branded gift certificates conferred by a certificate partner. In any case, rewards are preferably issued within thirty-six hours of test completion.
A reward issuance process tests whether the tester spent more than <T> minutes on the test; if so, they are preferably automatically rewarded. By default, <T> = 3 minutes. Otherwise, their entry in the tester database is marked "auto-denied", preventing them from being invited to future tests, and they receive no reward. If a rewarded tester has not yet selected a reward, they are preferably sent a form e-mail inviting them to choose one. The variable <T> is editable only by engineers, although it can be changed at any time without re-releasing the product or jeopardizing the stability of the product.
Each uploaded test result contains additional data about the tester. The automated target-market sampler 103 updates the tester database to reflect this. Specifically, the fields in the database containing the IP address, NIC address, and serial number of the C: drive are updated. Tests are preferably tabulated and validated in near-real-time as they arrive. If the total time spent on the test is less than <T>, the result is marked "auto-invalid"; by default, <T> = 3 minutes. If the time spent on any one objective is less than <A>, the test is marked "auto-invalid"; by default, <A> = 3 minutes. If, throughout the course of the entire test, the tester submitted fewer than <C> characters worth of comments, the test is marked "auto-invalid"; by default, <C> = 600. If a tester completes any objective in <B> steps or fewer, the test is marked "auto-invalid"; by default, <B> = 1. If a tester chooses "give-up" on any objective without first spending <M> minutes and clicking through at least <N> URL-steps, the test is marked "auto-invalid"; by default, <M> = 2 and <N> = 5. If, during the active test period, the site-monitoring software judged the target site to be down, unreliable, or extremely slow, the test is marked "auto-invalid." If the tester abandoned the entire test using the "Test - Stop Test..." menu on the browser, the test is marked "auto-invalid." If the test result is incomplete (e.g. truncated) or corrupted, as indicated by a checksum or CRC, the test is marked "auto-invalid". Otherwise, the test is marked "auto-valid." Each invalid test result increments a Num_Invalid_Tests field in the tester database. If Num_Invalid_Tests exceeds <Z>, the tester is marked "auto-denied" and disallowed from participating in future tests.
The variables <T>, <A>, <C>, <B>, <M>, <N>, and <Z> are preferably editable only by engineers, and can be changed at any time without re-releasing the product or jeopardizing the stability of the product.
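The validation rules can be sketched as a single predicate over an uploaded result, using the stated defaults (<T> = <A> = 3 minutes, <C> = 600 characters, <B> = 1 step, <M> = 2 minutes, <N> = 5 steps). The dictionary field names are assumptions, and the site-monitoring check is reduced to simple flags.

```python
# Stated defaults: T and A in minutes, C in characters, B and N in steps.
T_MIN, A_MIN, C_CHARS, B_STEPS, M_MIN, N_STEPS = 3, 3, 600, 1, 2, 5

def validate(result):
    """Apply the auto-invalid rules to one uploaded result (a dict with
    assumed field names); return "auto-valid" or "auto-invalid"."""
    if result["total_minutes"] < T_MIN:
        return "auto-invalid"               # whole test too short
    if any(o["minutes"] < A_MIN for o in result["objectives"]):
        return "auto-invalid"               # an objective too short
    if result["comment_chars"] < C_CHARS:
        return "auto-invalid"               # too few comment characters
    if any(o["steps"] <= B_STEPS for o in result["objectives"]):
        return "auto-invalid"               # objective done in B or fewer steps
    if any(o.get("gave_up") and (o["minutes"] < M_MIN or o["steps"] < N_STEPS)
           for o in result["objectives"]):
        return "auto-invalid"               # premature give-up
    if result.get("aborted") or result.get("corrupt"):
        return "auto-invalid"               # abandoned, truncated, or bad CRC
    return "auto-valid"

good = {"total_minutes": 12, "comment_chars": 700,
        "objectives": [{"minutes": 4, "steps": 6}]}
```

A companion counter would then increment Num_Invalid_Tests for each "auto-invalid" result and mark the tester "auto-denied" once <Z> is exceeded.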
Various small databases are preferably used to protect the quality and integrity of the tester population. Candidate testers are preferably screened using these lists whether acquired via a registration on our home page or purchased directly from another source. Each blacklist entry includes a field describing the intent of the entry and a field identifying the creator of the entry. Entries are preferably not case sensitive.
An Internet domain blacklist lists domains from which the administrator, as a matter of policy, chooses not to accept testers. For example, employees or competitors of the provider are not allowed to become active testers.
Similarly, individuals with certain names are disallowed from becoming active testers, either because they are likely to be bogus or because they are affiliated with a major competitor. A user may constrain the sample along any tester attribute, e.g., a specified probability distribution can be imposed on an attribute. This is useful for guaranteeing that a sample is roughly representative of an online population. A user may also impose a hard constraint, e.g. "at least", "at most", "between."
Each attribute has implicit constituents. For example, "age" can be divided into five ranges: Under 18, 18-34, 35-49, 50-64, and 65+. Users choose the fraction of the sample represented by each discrete segment. The total must sum to 100%.
User interfaces that allow users to directly manipulate the height of individual bars using a mouse would be desirable.
Although the invention is described herein with reference to the preferred embodiment, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention. Accordingly, the invention should only be limited by the Claims included below.

Claims

1. A market surveying method, comprising the steps of:
registering a user having a target website on the Internet that is to be tested and collecting test-constraint information;
registering a plurality of testers that are each connected to the Internet, and collecting demographic information about each tester;
constructing a candidate focus group, usability test, and market research from a sub-set of said plurality of testers;
inviting each tester-member of said candidate focus group, usability test, and market research to participate in a test of said target website;
logging each said tester who was invited to participate in said test, and who volunteered in response, into a final focus group, usability test, and market research with controlled numbers of testers with controlled demographics;
tracking the way each tester navigates through said target website, asking at least one question related to why a tester chose a particular click-path, and storing any information gathered into a results log;
rewarding each tester-member of said final focus group, usability test, and market research for participating in a test;
analyzing a test result statistic from said results log; and
reporting said test result statistic to said user.
2. The method of claim 1, wherein: the step of reporting includes charging said user a fee for a report that analyzes usage experiences and reactions of the final focus group, usability test, and market research to said target website.
3. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research is done via e-mail message on the Internet.
4. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research is done in waves of invitations where an excess number of said testers are invited given a pre-determined response rate.
5. The method of claim 1, wherein: the step of registering a plurality of testers is such that bogus registrations are refused.
6. The method of claim 1, further comprising the step of: collecting a heuristic to detect at least one of fraud or sloth by a user or a tester.
7. The method of claim 1, further comprising the step of: forecasting an invitation response rate by at least one of tester demographics, time-of-day, and day-of-week.
8. The method of claim 1, further comprising the step of: computing an operational metric that includes at least one of cycle time, throughput rate, and marginal operating costs.
9. The method of claim 1, further comprising the step of: ex-post analysis of said test result statistic for forecasts, heuristics, and operational metrics.
10. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research is done in continuous and adaptively launched successive waves that end when at least one of a target sample size and demographic distribution is reached.
11. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research is done in continuous and adaptively launched successive waves that end when a time-limit has passed.
12. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research provides a particular discrete probability distribution of said testers participating in said test.
13. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research provides a particular demographic distribution of said testers participating in said test.
14. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research includes reserving and quarantining a distinct tester population for use by a particular user.
15. The method of claim 1, wherein: the step of registering said user is such that said test-constraint information includes at least one of a test name, a sample size, a tester-attribute distribution, tester population name, test script, invitation text, and deadline.
16. The method of claim 1, further comprising the step of: allowing a user to adjust operational parameters for at least one of cycle time, invitation multiples, invitation yield, and mean response time.
17. The method of claim 1, wherein: the step of registering a plurality of testers includes downloading to them a special browser that monitors and reports how it is being manipulated during said test to a marketing service provider.
18. The method of claim 1, wherein: the steps of registering a user, registering a plurality of testers, constructing a candidate focus group, usability test, and market research, inviting each tester-member, logging each said tester, tracking the way each tester navigates, rewarding each tester-member, analyzing a test result statistic, and reporting said test result statistic, are all provided by a single marketing service provider.
19. The method of claim 1, further comprising the step of: communicating from a marketing service provider website on the Internet for each of the steps of registering a user, registering a plurality of testers, constructing a candidate focus group, usability test, and market research, inviting each tester-member, logging each said tester, tracking the way each tester navigates, rewarding each tester-member, analyzing a test result statistic, and reporting said test result statistic.
20. An automated target-market sampler, comprising: a marketing-services provider website operated by a marketing-services provider, and connected such that a plurality of client-users and volunteer testers can access the website over the Internet, and wherein such client-users have commercial websites of their own that they will pay said marketing-services provider to study and report on how effective and easy they are to navigate and use; and wherein said testers are rewarded for their participation in a focus-group type trial; wherein, said client-user specifies a target market and said testers are required to provide their respective demographic and background information; wherein, the marketing-services provider website registers such testers and accepts jobs from the client-users; wherein, a test is launched by sending waves of invitations by e-mail to said testers according to matches determined between a target market description and a plurality of tester-demographic profiles; wherein, a statistically accurate sample size is selected from candidate testers who respond back to said invitation, and a selected sample group of testers then navigate said commercial website; wherein, the marketing-services provider website tracks any click-paths taken by each tester and can ask a question about why a certain click was made; and wherein, a report is automatically generated for the client-user at the conclusion of the test.
21. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research is done via automated telephone call.
22. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research is done via automated mass postal mailing.
23. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research is done via Internet instant messaging protocol.
24. The method of claim 1, wherein: the step of inviting each tester-member of said candidate focus group, usability test, and market research is done via client software-enabled interruption of work in progress and instantaneous notification.
25. The method of claim 1, further comprising the step of: directing the testing methodology to the websites of one or more competitors of said user.
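Claims 1 and 20 both turn on matching a client's target-market description against stored tester-demographic profiles. A minimal sketch of such a matcher, in which the attribute names, the dictionary representation, and the all-constraints-must-hold rule are illustrative assumptions rather than anything the application specifies:

```python
def matches_target(profile, target):
    """Return True if a tester's demographic profile satisfies every
    constraint in the target-market description.

    target maps an attribute name to the set of acceptable values;
    a profile missing an attribute fails that constraint.
    """
    return all(profile.get(attr) in allowed for attr, allowed in target.items())


# Hypothetical target market and registered-tester profiles.
target = {"age_band": {"18-24", "25-34"}, "region": {"US", "CA"}}
testers = [
    {"id": 1, "age_band": "25-34", "region": "US"},
    {"id": 2, "age_band": "45-54", "region": "US"},
]

# Only testers matching every constraint become invitation candidates.
candidates = [t for t in testers if matches_target(t, target)]
# candidates contains only tester 1
```

A production system would presumably run an equivalent filter as a database query over the registered-tester pool rather than in application code.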
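Claim 7 calls for forecasting an invitation response rate by tester demographics, time-of-day, and day-of-week. The application does not specify a model; one simple possibility is a bucketed historical average with a fallback to the overall rate when a bucket is empty. All names here are illustrative:

```python
def forecast_response_rate(history, segment, hour, weekday):
    """Forecast the fraction of invitees who will volunteer.

    history is a list of (segment, hour, weekday, responded) tuples
    recording past invitations. Uses the mean of the matching
    (segment, hour, weekday) bucket, falling back to the overall
    mean when that bucket has no observations, and 0.0 when there
    is no history at all.
    """
    bucket, overall = [], []
    for seg, h, wd, responded in history:
        overall.append(responded)
        if (seg, h, wd) == (segment, hour, weekday):
            bucket.append(responded)
    source = bucket or overall
    return sum(source) / len(source) if source else 0.0
```

The forecast feeds the over-invitation multiple of claim 4; claim 9's ex-post analysis would refine these buckets after each completed test.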
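Claims 4 and 10-11 describe launching invitations in adaptively sized successive waves: each wave over-invites by the inverse of the forecast response rate and the process stops when the target sample size is reached (or, per claim 11, a time limit passes, modeled here as a wave cap). A minimal sketch under those assumptions; the function and parameter names are hypothetical:

```python
import math
import random


def run_invitation_waves(candidates, target_size, response_rate, max_waves=10):
    """Fill a test sample by inviting testers in adaptive waves.

    candidates: ordered pool of tester ids not yet invited.
    response_rate: forecast fraction of invitees who volunteer.
    Each wave invites ceil(shortfall / response_rate) testers, an
    excess given the predicted yield; waves stop once the target
    sample size is reached, the pool is exhausted, or max_waves
    (a stand-in for a time limit) is hit.
    """
    sample, pool = [], list(candidates)
    for _ in range(max_waves):
        shortfall = target_size - len(sample)
        if shortfall <= 0 or not pool:
            break
        wave_size = min(len(pool), math.ceil(shortfall / response_rate))
        wave, pool = pool[:wave_size], pool[wave_size:]
        # Stand-in for e-mail delivery and volunteer responses (claim 3):
        # each invitee volunteers independently with the forecast probability.
        responders = [t for t in wave if random.random() < response_rate]
        # Excess volunteers beyond the shortfall are turned away, giving
        # the controlled sample size of claim 1.
        sample.extend(responders[:shortfall])
    return sample
```

A real deployment would replace the simulated responses with actual invitation delivery and would also track per-demographic quotas to satisfy the controlled demographic distribution of claim 13.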
EP01906864A 2000-03-23 2001-02-01 Automated target-market sampler Withdrawn EP1285368A2 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US19169700P 2000-03-23 2000-03-23
US191697P 2000-03-23
US58863000A 2000-06-06 2000-06-06
US588630 2000-06-06
PCT/US2001/003277 WO2001071535A2 (en) 2000-03-23 2001-02-01 Automated target-market sampler

Publications (1)

Publication Number Publication Date
EP1285368A2 true EP1285368A2 (en) 2003-02-26

Family

ID=26887298

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01906864A Withdrawn EP1285368A2 (en) 2000-03-23 2001-02-01 Automated target-market sampler

Country Status (3)

Country Link
EP (1) EP1285368A2 (en)
AU (1) AU2001234724A1 (en)
WO (1) WO2001071535A2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019923B (en) * 2011-09-22 2015-10-21 腾讯科技(深圳)有限公司 The method and system of simulation hit testing
CN107743085B (en) * 2016-09-05 2019-11-15 腾讯科技(深圳)有限公司 Invite code management method and invitation code managing device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5960409A (en) * 1996-10-11 1999-09-28 Wexler; Daniel D. Third-party on-line accounting system and method therefor
EP0944002A1 (en) * 1998-03-18 1999-09-22 SONY EUROPE GmbH User profile substystem

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ALLEN; KANIA; YAECKEL: "Guide To One-To-One Web Marketing", 1998, ROBERT IPSEN, USA *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9111298B2 (en) 2011-03-08 2015-08-18 Affinova, Inc. System and method for concept development
US9208515B2 (en) 2011-03-08 2015-12-08 Affinnova, Inc. System and method for concept development
US9208132B2 (en) 2011-03-08 2015-12-08 The Nielsen Company (Us), Llc System and method for concept development with content aware text editor
US9218614B2 (en) 2011-03-08 2015-12-22 The Nielsen Company (Us), Llc System and method for concept development
US9262776B2 (en) 2011-03-08 2016-02-16 The Nielsen Company (Us), Llc System and method for concept development

Also Published As

Publication number Publication date
WO2001071535A2 (en) 2001-09-27
WO2001071535A8 (en) 2002-08-01
AU2001234724A1 (en) 2001-10-03

Similar Documents

Publication Publication Date Title
US8996437B2 (en) Smart survey with progressive discovery
US7698162B2 (en) Customer satisfaction system and method
US9344519B2 (en) Receiving and correlation of user choices to facilitate recommendations for peer-to-peer connections
US10108919B2 (en) Multi-variable assessment systems and methods that evaluate and predict entrepreneurial behavior
AU2010254225B2 (en) Measuring impact of online advertising campaigns
US8805717B2 (en) Method and system for improving performance of customer service representatives
US20110178851A1 (en) Enhancing virally-marketed facilities
US20080208644A1 (en) Apparatus and Method for Measuring Service Performance
WO2008086442A2 (en) Methods and systems for measuring online chat performance
Von Gaudecker et al. Experts in experiments: How selection matters for estimated distributions of risk preferences
US20040205184A1 (en) E-business operations measurements reporting
US20230368226A1 (en) Systems and methods for improved user experience participant selection
WO2001071535A2 (en) Automated target-market sampler
Adhy et al. Usability testing of weather monitoring on android application
Kamble et al. The Square Root Agreement Rule for Incentivizing Truthful Feedback on Online Platforms
Cheruy et al. OSS popularity: Understanding the relationship between user-developer interaction, market potential and development stage
Buhaljoti Identifying key factors affecting customer’s decision-making of internet service providers in Albania
KR20090012507A (en) Apparatus and method for providing question and answer services
KR20060097288A (en) Research system using internet messenger
Van Kuijk et al. Usability in product development practice: After sales information as feedback
Moss et al. Using Market Research Panels for Behavioral Science: An Overview and Tutorial
KR20030048991A (en) Eelectronic Marketing System Capable of Leading Sales and Method thereof
Pfeiffer et al. Incentivized social sharing: Characteristics and optimization
KR20020006065A (en) Method for field-testing products in internet
Waikar et al. How Can internet service Providers tap into the Potentially-lucrative small business Market?

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20021021

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

RIN1 Information on inventor provided before grant (corrected)

Inventor name: DURAN, RAUL

Inventor name: HIRSCHFELD, BRIAN

Inventor name: VENKATRAMAN, MAYA

Inventor name: IVATT, GARETH

Inventor name: WONG, KAREN

Inventor name: RISHER, MARK

Inventor name: ELLIOT, MIKE

Inventor name: TAN, JOSEPH

Inventor name: KETCHPEL, STEPHEN

Inventor name: LORANCE, JOHN

Inventor name: BIRMAN, MISHA

Inventor name: PATTERSON, JIM

RIN1 Information on inventor provided before grant (corrected)

Inventor name: DURAN, RAUL

Inventor name: HIRSCHFELD, BRIAN

Inventor name: VENKATRAMAN, MAYA

Inventor name: IVATT, GARETH

Inventor name: WONG, KAREN

Inventor name: RISHER, MARK

Inventor name: ELLIOT, MIKE

Inventor name: TAN, JOSEPH

Inventor name: KETCHPEL, STEPHEN

Inventor name: LORANCE, JOHN

Inventor name: BIRMAN, MISHA

Inventor name: PATTERSON, JAMES

17Q First examination report despatched

Effective date: 20040416

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20040705
