US8571463B2 - Systems and methods for computerized interactive skill training - Google Patents

Systems and methods for computerized interactive skill training

Info

Publication number
US8571463B2
US8571463B2 US11/669,079 US66907907A
Authority
US
United States
Prior art keywords
challenge
trainee
user
response
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US11/669,079
Other versions
US20080182231A1 (en)
Inventor
Martin L. Cohen
Edward G. Brown
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Breakthrough Performancetech LLC
Original Assignee
Breakthrough Performancetech LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Breakthrough Performancetech LLC filed Critical Breakthrough Performancetech LLC
Priority to US11/669,079 priority Critical patent/US8571463B2/en
Assigned to BREAKTHROUGH PERFORMANCE TECHNOLOGIES, L.L.C. reassignment BREAKTHROUGH PERFORMANCE TECHNOLOGIES, L.L.C. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: ADVANCED LISTENING TECHNOLOGIES, LLC
Priority to AU2008210903A priority patent/AU2008210903B2/en
Priority to MX2009008131A priority patent/MX2009008131A/en
Priority to SG2012003273A priority patent/SG177988A1/en
Priority to EP08727554.1A priority patent/EP2118874A4/en
Priority to BRPI0807176-4A priority patent/BRPI0807176A2/en
Priority to JP2009547348A priority patent/JP2010517098A/en
Priority to PCT/US2008/050806 priority patent/WO2008094736A2/en
Priority to CA002676137A priority patent/CA2676137A1/en
Assigned to BREAKTHROUGH PERFORMANCETECH, LLC reassignment BREAKTHROUGH PERFORMANCETECH, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BROWN, EDWARD G., COHEN, MARTIN L.
Assigned to BREAKTHROUGH PERFORMANCETECH, LLC reassignment BREAKTHROUGH PERFORMANCETECH, LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: BREAKTHROUGH PERFORMANCE TECHNOLOGIES, LLC
Publication of US20080182231A1 publication Critical patent/US20080182231A1/en
Priority to US14/056,763 priority patent/US9633572B2/en
Publication of US8571463B2 publication Critical patent/US8571463B2/en
Application granted granted Critical
Priority to US15/492,879 priority patent/US10152897B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10: Services
    • G06Q 50/20: Education
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00: Teaching not covered by other main groups of this subclass
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 3/00: Manually or mechanically operated teaching appliances working with questions and answers
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/08: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00: Electrically-operated teaching apparatus or devices working with questions and answers
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00: Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B 7/02: Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G09B 7/04: Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student, characterised by modifying the teaching programme in response to a wrong answer, e.g. repeating the question, supplying a further explanation

Definitions

  • the present invention is directed to interactive training, and in particular, to methods and systems for computerized interactive skill training.
  • Certain example embodiments teach and train a user to utilize information and skills in a simulated real-world environment. For example, a user provides verbalized responses that engender relatively instant feedback. Users are optionally trained to provide information, respond to objections, and/or ask questions as appropriate, automatically or almost automatically, without undesirable pauses. Optionally, users are scored based on their retention of the information, and their ability to provide the information to others in a natural, confident manner. Thus, certain embodiments aid users in internalizing and behaviorally embedding information and skills learned during training. Furthermore, certain embodiments of performance drilling serve as a coaching and self-coaching tool.
  • An example embodiment provides a computerized training system comprising programmatic code stored in computer readable memory configured to: receive log-in information for a trainee and/or a facilitator; identify one or more training modules based at least in part on at least a portion of the login information; receive a selection of at least one of the identified training modules; cause the verbalization of a challenge in conjunction with a displayed human or animated person simulating a customer or prospect, wherein the challenge includes a statement or question; receive a first challenge score from the facilitator related to a verbalized trainee challenge response, wherein the first challenge score is related to the correctness and/or completeness of the challenge response; receive a second challenge score from the facilitator related to the verbalized trainee challenge response, wherein the second challenge score is related to how quickly the trainee provided the challenge response; receive a third challenge score from the facilitator related to the verbalized trainee challenge response, wherein the third challenge score is related to the confidence and/or style with which the trainee verbalized the challenge response; and randomly or pseudo-randomly select a next challenge that
  • An example embodiment provides a computerized training system comprising programmatic code stored in computer readable memory configured to: receive a selection of at least one training subject; provide the trainee with a challenge via a simulated customer or prospect related to a product or service corresponding to the selected training subject; receive a first challenge score from a facilitator related to a verbalized trainee response, wherein the first challenge score is related to the correctness and/or completeness of the challenge response; receive a second challenge score related to the verbalized trainee response, wherein the second challenge score is related to how quickly the trainee provided the challenge response; and receive a third challenge score from the facilitator related to the verbalized trainee response, wherein the third challenge score is related to the confidence and/or style with which the trainee verbalized the challenge response.
  • An example embodiment provides a method of providing training using a computerized system, the method comprising: receiving at a first computerized system a selection of a first training subject; accessing from computer memory a training challenge related to the first training subject; providing via a terminal the challenge verbally to a user, wherein the challenge is intended to simulate a question or statement verbally provided by a person in a conversation; storing in computer readable memory substantially immediately after the user verbally provides a challenge response a first score related to the correctness and/or completeness of the challenge response; and storing in computer readable memory substantially immediately after the user verbally provides the challenge response a second score related to how quickly the user provided the challenge response.
  • An example embodiment provides a method of providing training using a computerized system, the method comprising: accessing from computer memory a training information challenge; providing via a terminal the information challenge to a user, wherein the information challenge is intended to simulate a question or statement from another person; storing in computer readable memory a first score related to the correctness and/or completeness of an information challenge response provided by the user, and a second score related to the confidence and/or timing of the information challenge response provided by the user, wherein the first and second scores are stored substantially immediately after the user provides the information challenge response; accessing from computer memory a training challenge related to an objection; providing via a terminal the objection challenge to a user, wherein the objection challenge is intended to simulate a question or statement from the person; and storing in computer readable memory a third score related to the correctness and/or completeness of an objection challenge response provided by the user, and a fourth score related to the timing of the objection challenge response provided by the user.
  • An example embodiment provides a method of providing training using a computerized system, the method comprising: accessing from computer memory a training challenge related to a first training subject; providing via a terminal the challenge verbally to a user, wherein the challenge is intended to simulate a question or statement from a person to simulate a real-time conversation with the person; and storing in computer readable memory at least one of the following substantially immediately after a challenge response is provided by the user: a first score related to the correctness and/or completeness of the challenge response provided by the user; a second score related to how quickly the user provided the challenge response; and a third score related to the confidence and/or style with which the user verbalized the challenge response.
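The three score types recited in the example embodiments (correctness/completeness, speed, and confidence/style, each stored substantially immediately after the verbalized response) can be illustrated with a minimal data-model sketch. The class and field names below are hypothetical illustrations, not taken from the patent:

```python
import time
from dataclasses import dataclass, field

@dataclass
class ChallengeScore:
    """One scored trainee response, recorded substantially immediately
    after the verbalized response."""
    challenge_id: str
    correctness: int  # correctness/completeness of the response
    speed: int        # how quickly the response was provided
    confidence: int   # confidence/style of the verbalization
    recorded_at: float = field(default_factory=time.time)

@dataclass
class TrainingSession:
    trainee_id: str
    scores: list = field(default_factory=list)

    def record(self, score: ChallengeScore) -> None:
        # Persist the facilitator's scores as soon as they are entered.
        self.scores.append(score)

session = TrainingSession(trainee_id="trainee-1")
session.record(ChallengeScore("home-equity-01", correctness=4, speed=5, confidence=3))
```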
  • FIG. 1 illustrates an example networked system that can be used with the training system described herein.
  • FIG. 2 illustrates an example process flow.
  • FIGS. 3A-II illustrate example user interfaces.
  • the present invention is directed to interactive training, and in particular, to methods and systems for computerized interactive skill training. Certain embodiments also provide blended learning during the period of interactivity (e.g., computer-interaction learning with a human facilitator participating).
  • example embodiments utilize a processor-based training system to drill and train users with respect to acquiring certain information and skills.
  • Certain embodiments train a user to utilize the acquired information and skills in a simulated real-world environment where the user interacts with another person (real or simulated).
  • the user is scored based on their retention of the information, and their ability to provide the information to others in a natural, confident manner (e.g., without hemming and hawing).
  • the training system optionally enables real-time or post-testing scoring of user training sessions.
  • the scoring optionally includes sub-category scoring, consolidated category scoring, and scoring which helps the user and others to focus upon the areas that need significant or the greatest improvement.
  • the category can be financial transactions, and sub-categories can include saving deposits, withdrawals, debit cards, checking accounts, and credit cards.
  • the consolidated scoring report provides a total score, and subcategory scores report individual scores for corresponding subcategories, so the user and others can better understand the user's performance deficits at a more granular level, and can focus additional training on lower scored subcategories.
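The consolidated report described above (a total score plus per-subcategory scores that surface the lowest-scored areas) can be sketched as follows; the function name, averaging scheme, and threshold are illustrative assumptions, not specified by the patent:

```python
def score_report(subcategory_scores, focus_threshold=3.0):
    """Build a consolidated report: per-subcategory averages, a total
    score, and the subcategories needing the most additional training."""
    averages = {name: sum(s) / len(s) for name, s in subcategory_scores.items()}
    total = sum(averages.values()) / len(averages)
    needs_focus = sorted(n for n, avg in averages.items() if avg < focus_threshold)
    return {"total": total, "subcategories": averages, "needs_focus": needs_focus}

report = score_report({
    "savings deposits": [4, 5],
    "withdrawals": [2, 3],
    "debit cards": [5, 4],
    "checking accounts": [1, 2],
})
# report["needs_focus"] -> ["checking accounts", "withdrawals"]
```

Additional drilling could then be targeted at the subcategories in `needs_focus`, consistent with the granular-deficit idea above.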
  • scoring can be deleted or otherwise not retained in computer accessible memory long term (e.g., it is removed from memory when the user logs out of the training system or earlier) and is optionally not printed out in hardcopy form, and instead a report is generated and stored indicating simply that the user needs help or further training with respect to certain categories/subcategories (e.g., without making a point or grade distinction with respect to other users that also need help).
  • the training optionally enables a user to provide answers and information to others in real life situations with customers and prospects in a manner that instills trust and confidence with respect to the user.
  • the training system aids users in internalizing and behaviorally embedding information and skills learned during training. Users are optionally trained to provide information, respond to objections, or ask questions, as appropriate, automatically or almost automatically, without undesirable pauses.
  • the training system as described herein can optionally be configured and used to provide training with respect to academic knowledge, and/or other skill improvement that involves verbalization, including relationship building.
  • Examples of categories include, but are not limited to, some or all of the following: Information (e.g., Product Information, information regarding an academic subject, information about a person, etc.), Objections (e.g., Product Objections, objections to a course of action, etc.), Generic Objections (e.g., generic objections to products or services), Service Queries, Resolving Service Problems, Dealing with Angry Customers, Dealing with Life Events (e.g., Divorce, Marriage, birth, Death, Travel, etc.), Making Referrals to Specialists, Differentiation and Orientation Statements, Sales, Service and Relationship Technique Drilling.
  • a user such as a trainee, utilizes a training terminal (e.g., a personal computer, an interactive television, a networked telephone, a personal digital assistant, an entertainment device, etc.) or other device, to access a training program stored locally or remotely in computer readable memory.
  • the user may be requested or required to log-in (e.g., provide a password and/or user name) to access the training program and/or one or more training modules.
  • the training system utilizes the log-in information and/or a cookie or other file stored on the user's terminal to determine which training scenarios/modules the user has already taken and/or completed, so that the system can automatically select the appropriate training module for the user and store the user's interactions and/or score in a database in association with the user identifier.
  • a library/cataloging function provides users/facilitators the ability to precisely choose the skill training desired.
  • the user/facilitator can choose full categories and/or sub-categories.
  • the system can present the user (or a trainer) with a selection of modules (e.g., in the form of categories/training sequences), and the user and/or facilitator selects the desired module.
  • a training administrator can select and specify the module to be presented to the user.
  • the system automatically selects which module (or segment therein) is to be presented based on the user's training history (e.g., which modules the user has completed and/or based on corresponding module training scores), the user's training authorization, and/or other criteria.
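The automatic module selection described above (based on training history, completion scores, and the user's training authorization) can be sketched as a simple lookup; the catalog ordering, passing score, and all names are hypothetical, assumed only for illustration:

```python
def select_next_module(catalog, history, authorized, passing_score=80):
    """Return the first authorized module the trainee has not yet
    passed, or None when every authorized module has been passed."""
    for module in catalog:
        if module in authorized and history.get(module, 0) < passing_score:
            return module
    return None

catalog = ["Product Descriptions", "Product Usage", "Product Objections"]
next_module = select_next_module(
    catalog,
    history={"Product Descriptions": 92, "Product Usage": 61},
    authorized=set(catalog),
)
# next_module -> "Product Usage" (taken before but not passed)
```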
  • the facilitator optionally acts as a coach.
  • modules are optionally provided which drill users on specific areas within their industry/job function.
  • certain modules may be intended to prepare users for dealing with actual customer or prospect challenges.
  • Modules may be focused on learning about products/services, comparisons of product/services (e.g., comparisons of product/services offered by the user's employer, comparisons of product/services offered by the user's employer with products/services of another company or other entity), handling customer complaints, resolving service issues, providing customers with background information on the company, etc.
  • Modules may also be focused on academic training and/or relationship building, etc.
  • modules in the form of challenge categories and sub-categories which can be presented to a user via a user interface.
  • a set of sub-sections can include Product Descriptions, Product Usage and Product Objections.
  • Product Descriptions means responding to a more general question/challenge regarding a product, such as “tell me about your home equity lines of credit” or a similar question about another product.
  • Product Usage means responding to a question/challenge regarding the details or specifics of a product's operation or usage, such as “exactly how does a home equity loan work?”, “are there any fees when I use the account?”, “are there any minimum credit balances I must maintain?”, “what are the yearly fees, if any?”, etc.
  • Product Objections are those objections and/or resistances expressed by customers and/or prospects regarding a specific product (or service) (e.g., “I don't see the need to get a platinum credit card when a gold or regular credit card will do”, “I won't pay a monthly fee to use checks”). This is distinct from Generic Objections, which are objections and/or resistances that would apply generally to products/services and/or situations (e.g., “I am not interested”, “I don't have time”, “I don't have the money,” “I don't like your organization”, etc.). In an example embodiment, Generic Objections are a separate performance drilling section.
  • a user and/or facilitator can limit a training session to one sub-section for one or more products, or the user and/or facilitator can drill down/navigate to other subsections.
  • randomized challenges presented during a training session can be limited to Product Descriptions and/or optionally there can be integrated product-related mastery training across several or all subsections for a product, such as Product Descriptions, Product Usage, and Product Objections.
  • the training is performed in a specified subsection sequence (e.g., first Product Descriptions, second Product Usage, and third Product Objections), or the training can include randomized challenges across the multiple subsections.
  • challenges can relate to comparisons, such as comparisons of product/services, people, places, etc.
  • the comparisons can include comparisons of products/services offered by the user's employer, comparisons of products/services offered by the user's employer with products/services of another company or other entity, and/or products and services of two or more other entities other than the user's employer.
  • a challenge can be a question regarding two different products or services, such as:
  • if the user achieves a certain score (e.g., a key elements score, explained in greater detail below) that meets a predetermined or other threshold (e.g., “four out of five”, “two out of three”, “eight out of nine”, or other threshold), an automatic linkage is provided to another category (e.g., the Product/Service Usage category) so that the linked-to category will next be tested.
  • if the user then meets a certain threshold (e.g., “four out of five”) in the Product/Service Usage category, there would be an automatic linkage to still another category (e.g., the Product/Service Objections category).
  • otherwise, additional and/or repeated challenges within the current category are presented to further drill the user in the current category until the user's score improves to meet the threshold (or another specified threshold).
  • the user needs to repeat the category drilling until the user scores the specified threshold (e.g., “four out of five”) before they are able to proceed to the next category.
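The threshold-based linkage between categories can be sketched as a small state transition; the category names, link table, and 4-out-of-5 threshold are examples drawn from the text, while the function itself is an illustrative assumption:

```python
def next_category(current, score, total, links, threshold=0.8):
    """Follow the automatic linkage to the next category when the
    trainee's score meets the threshold (e.g., four out of five);
    otherwise repeat drilling in the current category."""
    if total > 0 and score / total >= threshold:
        return links.get(current, current)
    return current

links = {"Product Descriptions": "Product Usage",
         "Product Usage": "Product Objections"}

passed = next_category("Product Descriptions", 4, 5, links)  # meets 4/5
failed = next_category("Product Usage", 2, 5, links)         # repeat drilling
```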
  • Before presenting the actual training user interfaces, the system optionally provides text, audio, and/or video instructions to the user that explain the purpose of the selected training module, how the user is to interact with the training program, the scoring process, and/or other information.
  • a trainer/facilitator is online and/or present when the user/trainee is undergoing training via the system.
  • the trainer may be sitting alongside the trainee, looking at the same terminal screen and/or the trainer may be viewing the screen of a separate trainer terminal which presents similar or the same user interfaces as viewed by the trainee, optionally with additional trainer information (e.g., training answers).
  • the trainer provides the trainee with instructions on how to utilize the training system and/or provides real time or delayed scoring of the trainee's training session, as described in greater detail below.
  • the system presents a user interface to the trainee that informs the trainee regarding the subject matter of the training session.
  • the system can be used to train a sales and/or service person in a particular industry (e.g., banking, finance, travel agency, automobile sales person, telephony, utilities, etc), train a person on how to relate in a personal situation (e.g., with a spouse, child, sibling, parent, girlfriend/boyfriend, etc.), train a person with respect to academic knowledge, or for other purposes.
  • a trainee may be informed that the training session provides training with respect to credit cards for minors.
  • the training may be intended to train a user in how to respond to a more open-ended question.
  • a question or comment may relate to a customer's or prospect's marital status, health, a trip, a residence, and/or a child.
  • the system can train the trainee how to respond to such questions or comments, which can take the following example forms:
  • the training optionally trains the user to overcome objections to a course of action proposed by the trainee to a customer/prospect.
  • the training may be intended to train the user in how to handle a customer that comes in with a service complaint (e.g., “The product does not work as described” or “Why weren't my funds transferred as instructed?”).
  • the training system optionally provides academic training related to subject matter taught in a school or employer classroom setting, or otherwise (e.g., “Who were the first five Presidents of the United States?”; “List, in order, the 10 steps that need to be taken in order to approve a loan request”; “Who should you first attempt to contact in the event there has been a work accident?”, etc.).
  • the training can be related to math, history, English, a foreign language, computer science, engineering, medicine, psychology, proper procedures at a place of employment, etc.
  • the training is not necessarily related to interaction with or challenges from another person, such as a customer, prospect, or family member.
  • the academic training can be used to reinforce training previously provided to the trainee.
  • the trainee is also informed of the different stages of a training session, which begin with pre-study screens (also referred to as user interfaces).
  • the trainee is further informed that after the pre-study screen(s), the tested portion will begin.
  • the pre-study screens/user interfaces optionally include text, an audible verbalization of the text, and/or a video or animated figure synchronized with the verbalization.
  • the pre-study screen(s) is intended to familiarize the trainee with the elements (optionally, only the key elements) that are to be tested, both to educate the trainee and so that the trainee will not feel that they are unfairly tested.
  • the training will be in the form of challenges that the trainee is asked to respond to. To overcome or successfully respond to these challenges, there are certain elements (e.g., key elements) that the trainee has to state.
  • the pre-study screen(s) will provide the trainee with the key elements necessary in responding to the challenges.
  • the pre-study screens may be automatically or manually (e.g., by the trainer, user, and/or a system operator) turned off for one or more training sessions for a given user.
  • a bypass control (e.g., a button or link) is optionally provided for skipping the pre-study screen(s); a facilitator may elect to activate the bypass control because the user should already know what the pre-study key elements are based upon prior training.
  • bypassing the pre-study screen(s) provides advanced real-world “stress testing”: when dealing with a person or persons who verbalize a challenge, the recipient of the challenge typically does not have an opportunity to refer to “Pre-Study” materials.
  • not presenting the pre-study screen(s) (e.g., at pre-determined times or randomly) prior to a scored session enables the system to “pre-test” users' knowledge base before they are able to observe pre-study key element screens.
  • such pre-testing can serve as a motivator to the user if their score is sub-par, as well as serve to establish performance baselines.
  • the performance baseline scoring can be compared with scoring after the user has viewed the pre-study screens to provide the user/trainer/company with “before and after” evidence of progress.
  • a time limit may be set on how long the user can view/study a given pre-study screen and/or a set of pre-study screens, optionally indicated via a timer (e.g., a count-down timer, a color-coded timer (green, yellow, red), or other timer).
  • this can serve to provide the user with a brief reminder of the key elements which they should have already pre-studied, but not to give them unlimited time to learn the key elements from scratch.
  • this serves to limit the time the user spends on the entire module, so that the user does not take “20 minutes” (or other excessive period of time) to complete a module which should have been completed in eight minutes (or other period of time).
  • the above can be applied on a “key element by key element” basis; that is, as each key element screen is brought up, it can only be viewed for a limited period of time before the next key element screen is brought up, and so on.
  • the rate at which the key elements are presented is optionally pre-set or set during the session by the facilitator.
  • the above can be achieved by screens proceeding automatically to the next screen and/or screens “fading to black” after a set time period (e.g., pre-set or set during the session by the facilitator) and then having the next screen automatically come up, etc.
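The timed, automatic advancement through key-element screens can be sketched as a simple loop; the function, the per-screen window, and the screen strings are illustrative assumptions, and a real implementation would use the UI framework's event loop rather than sleeping:

```python
import time

def present_key_elements(screens, seconds_per_screen, show=print):
    """Advance automatically through key-element screens, giving each a
    fixed viewing window before the next one is brought up."""
    for screen in screens:
        show(screen)
        deadline = time.monotonic() + seconds_per_screen
        while time.monotonic() < deadline:
            time.sleep(0.01)  # stand-in for a UI event-loop tick

present_key_elements(
    ["Key element 1: no annual fee", "Key element 2: variable APR"],
    seconds_per_screen=0.1,
)
```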
  • the system enables a company to alter/adapt/change key elements based upon real world realities. For example, if it is discovered that the five existing key elements to answering a particular challenge are not as effective as a different set of key elements in the real world (even a change in a single key element), then the key elements for this particular objection are changed accordingly to match experiential realities.
  • the pre-study elements are optionally packaged or presented together so as to better train the trainee to respond to randomized challenges which better mimic real world situations. Additionally, certain elements (e.g., key elements), are kept focused (e.g., unitary, short) to enhance objective scoring and reduce subjective scoring.
  • the key elements may optionally be role modeled and verbalized, with the text of the key elements appearing as they are verbalized, for cognitive and behavioral embedding purposes. The text super-impositions are optionally highlighted as they are displayed.
  • on the pre-study screen(s) there will be related elements that are not as essential as the key elements.
  • trainees will be instructed or advised to study these related elements and may be provided extra credit for identifying these when responding to the challenges.
  • the pre-study screen(s) are optionally consolidated to contain the key elements related to various challenges within the same category or module.
  • printing of the pre-study screens (e.g., a listing of the elements)
  • the printing of certain other user interfaces (e.g., the challenge user interfaces and/or the model answer user interfaces)
  • different challenges are repeated different numbers of times.
  • the selection of the challenges to be repeated and/or the repeat rate are purposely random or pseudo random to mimic the real world experience and to prevent rote memorization.
  • the more significant elements are weighted (e.g., by a person crafting the training) so that the more significant elements are or tend to be repeated more often than those elements that are considered less significant.
  • the weightings can be stored in computer readable memory and optionally automatically applied by the system.
  • the trainer can manually instruct, via a user interface control, that one or more select challenges are to be repeated (e.g., in a non-randomized fashion).
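The weighted, pseudo-random repetition described above can be sketched as follows. This is a minimal illustration only; the function name `pick_challenge` and the use of Python's `random.choices` are choices of this sketch, not part of the specification.

```python
import random

def pick_challenge(challenges, weights, rng=random):
    """Pseudo-randomly pick the next challenge, with higher-weighted
    (more significant) challenges tending to repeat more often."""
    return rng.choices(challenges, weights=weights, k=1)[0]
```

Over many drills, a challenge weighted 3 would tend to be presented roughly three times as often as one weighted 1, while still appearing in an unpredictable order so as to discourage rote memorization.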
  • the challenges may include one or more of the following elements and/or other elements:
  • the challenges may be presented as displayed text, as part of a role playing scenario (e.g., where the user is presented with a scenario involving an animation or person playing an appropriate role, which presents the opportunity for the trainee to state/provide the elements), with the elements presented audibly, textually (optionally in an overlay over the video portion), and/or otherwise.
  • the elements may be those considered by the trainee's management to be more significant or key, so that the trainee is not overly burdened with having to remember all related elements (which can optionally be accessed instead during a real-life interaction, via a computer or otherwise, after the trainee has built credibility and trust with an actual customer or prospect, wherein the credibility and trust is the result, at least in part, of the trainee's ability to respond without having to read from a list, manual, brochure, etc.).
  • the trainee's management or other authorized personnel can specify, select, or modify the elements as desired.
  • By optionally placing the burden on the trainee's management/employer to identify the more significant elements, they are encouraged to better understand and identify what is expected from employees performing a given job function.
  • the test portion includes a scene having one or more people (real or animated) playing an appropriate role, such as that of a customer, prospect, a family member, or other person as appropriate for the skill being trained.
  • the actors playing the roles can read a script relevant to the field and skill being trained.
  • the script includes “challenges” (e.g., questions, statements, or information) randomly or pseudo randomly presented, or presented in a predetermined order to the trainee.
  • the challenges are optionally verbalized and/or acted out by a real or animated person/actor.
  • the person or people in the scene may or may not be lip-synced to a verbalization of the script.
  • the person or people in the scene may be of different ethnicities as selected by the employer, the facilitator, the training system provider, or other entity.
  • the speech patterns and/or accents of the person or people in the scene may be selected by the employer, the facilitator, the training system provider or other entity.
  • the foregoing selection may be made from a menu presented on a terminal (e.g., a menu listing one or more ethnicities and/or accents) and stored in memory.
  • the trainee is expected to respond with the appropriate element(s) taught during the pre-training session.
  • a timer (e.g., a countdown timer)
  • the trainee provides the response verbally, but may also do so by typing/writing in the response, by selecting the response from a multiple choice offering, or otherwise.
  • the system, automatically and/or in response to a trainer instruction, presents the correct answer to the trainer.
  • the trainee will then be graded/scored based on one or more of the following elements.
  • a trainee that provides an appropriate element, but that was too slow or too fast in providing it (so that it would appear unnatural to a real customer), and/or appeared or sounded nervous when providing it, will not receive a “perfect” score for that element.
  • the trainee will be graded on how closely the text of the element(s) recited by the trainee matches that provided to the trainee on the answer screens, which matches the key elements on the pre-study screens.
  • a countdown timer is set to a certain value during a challenge response period and the trainee has to provide the challenge response before the timer reaches a certain point (e.g., 0 seconds).
  • the current countdown time can be displayed to the trainee in a “seconds” format, and/or in other formats related to how much time is remaining (e.g., green for a first amount of time, yellow for a second amount of time, and red for a third amount of time).
  • the trainee's score is based on the timer value at the time the trainee provided the response.
  • a potential score is displayed which is decremented as the timer counts down, and the trainee is assigned the score displayed when the trainee provides the response.
  • a system operator and/or the facilitator can set the initial countdown time and/or the rate of the score reduction.
  • the facilitator can reset or change the timer value in real-time or otherwise.
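The countdown scoring described above might be sketched as follows. The decrement rate and the green/yellow/red thresholds shown are illustrative assumptions, since the specification leaves those values to the system operator or facilitator.

```python
def potential_score(initial_score, decrement_per_second, elapsed_seconds):
    # The displayed potential score is decremented as the timer counts
    # down; the trainee is assigned whatever score is displayed at the
    # moment the trainee provides the response (never below zero).
    return max(0, initial_score - decrement_per_second * elapsed_seconds)

def time_color(seconds_remaining, green_above=20, yellow_above=10):
    # Color bands for the displayed countdown (thresholds are assumed).
    if seconds_remaining > green_above:
        return "green"
    if seconds_remaining > yellow_above:
        return "yellow"
    return "red"
```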
  • key elements for correct answers will be in the “correct order/sequence”. That is, what the client and/or training implementer believes or has identified as the preferred presentation sequence.
  • the user is graded on the correctness of the sequence of their answer as well.
  • an actor may play a bank customer or prospect.
  • the trainee observes the scene, and recites the appropriate element(s) at the appropriate time in response to questions asked by or information offered by the bank customer or prospect which may relate to banking services. For example, if the trainee is being trained to recommend and/or offer information regarding a checking account for minors, the actor may ask questions regarding why a minor needs a checking account, the costs associated with a checking account, and the risks associated with a minor having a checking account.
  • the trainee is expected to respond to the customer questions/information with the element(s) (e.g., the key elements) taught during the pre-training session.
  • the trainee is not permitted to refer to notes or other materials (e.g., printed materials, such as books or course handouts) during the testing phase.
  • the trainee's response may be observed (e.g., listened to and/or viewed) in substantially real-time by the trainer.
  • the trainee's response is recorded (e.g., a video and/or audio recording) by the terminal or other system for later playback by a trainer and/or the trainee, and/or for later scoring.
  • embedded or otherwise associated with the audio track and/or video track of the scene is computer-readable digital metadata that identifies where/when a challenge occurs in the track, what the challenge is, and/or the element that the trainee is to provide.
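One way such track metadata could be represented is sketched below; the type and field names are hypothetical, not taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class ChallengeMarker:
    start_seconds: float     # where/when the challenge occurs in the track
    challenge_text: str      # what the challenge is
    expected_elements: list  # element(s) the trainee is to provide

def markers_in_window(markers, t0, t1):
    # Markers whose challenge begins within the playback window [t0, t1).
    return [m for m in markers if t0 <= m.start_seconds < t1]
```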
  • the correct elements are automatically (or optionally in response to a trainer or trainee action) presented to the trainer, optionally using the same text as presented to the trainee during the pre-training phase.
  • the trainer terminal can present the same scene being observed by the trainee, wherein an indication is provided to the trainer as to when the trainee is to provide an element, and the system presents the correct element via a textual overlay with respect to the scene. This enables the trainer to efficiently score the trainee based on the element (if any) provided by the trainee. In addition, the trainer may score the confidence and naturalness/timing with which the trainee provided the element, as similarly discussed above.
  • the score may be entered by the trainer into a scoring field presented via the trainer terminal.
  • the scores are entered and stored in computer memory substantially immediately after the trainee provides a verbal challenge response (e.g., within 15 seconds, 30 seconds, or 60 seconds).
  • several scoring fields are provided so that the trainer can enter scores for different aspects of the trainee's provision of the element. For example, there may be a “correct element” field, a “level of confidence” field, a “naturalness of response” field, and/or a “timing of response” field.
  • the field may enable the trainer to enter (or select) a number score (e.g., 1-5), a letter score (e.g., A-F), a phrase (e.g., excellent, good, fair, poor), or other score.
  • scoring icons (e.g., circular scoring icons)
  • the facilitator will click on a scoring icon to provide the trainee a point (or other score) for identifying a key element.
  • the icon, originally white, will turn green to signify the user has correctly identified a key element.
  • Other colors/indicators can be used as well. If the facilitator clicks on these scoring icons in error, they have the option of re-clicking on the scoring icon(s) (or otherwise correcting the scoring error). This will return the icon to white and no points will be calculated.
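The white/green scoring-icon behavior amounts to a simple toggle, sketched below; the per-icon point value is an assumption of this sketch.

```python
def toggle_icon(state):
    # A white icon clicked turns green (key element identified);
    # re-clicking a green icon reverts a scoring error back to white.
    return "green" if state == "white" else "white"

def icon_points(icon_states, point_value=1):
    # Only green icons contribute points; white icons score nothing.
    return point_value * sum(1 for s in icon_states if s == "green")
```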
  • the system automatically scores one or more aspects of the trainee's performance.
  • the system can detect (e.g., via sound received via a microphone coupled to the trainee terminal, wherein input received via the microphone is translated into a digital value) how long it takes the trainee to begin providing an element after a “challenge” (as identified to the training system via the metadata discussed above), and score the speed of the trainee's response and/or provide the actual elapsed time between the challenge and the trainee's response and/or present the elapsed time to the trainer.
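A minimal sketch of that latency measurement over digitized microphone samples follows. The loudness threshold, and the assumption that the sample index at which the challenge ends is known from the track metadata, are illustrative.

```python
def response_latency(samples, sample_rate, challenge_end_index, threshold=0.1):
    """Seconds between the end of a challenge and the first microphone
    sample whose amplitude crosses the threshold; None if the trainee
    never responds."""
    for i in range(challenge_end_index, len(samples)):
        if abs(samples[i]) >= threshold:
            return (i - challenge_end_index) / sample_rate
    return None
```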
  • scoring the immediacy of response and confidence separately, rather than solely providing a blended score of the two, aids the user/trainer in understanding more precisely the learning and performance deficits of the trainee.
  • the trainer can also provide textual/verbal comments (or optionally select predefined comments presented to the trainer via a user interface) regarding the trainee's confidence and the naturalness of the trainee's response.
  • the trainer's user interface can include a text field via which the trainer can enter comments.
  • the scores for two or more aspects of the trainee's provision of an element may be combined into a single score (e.g., as an average score, which is optionally weighted). For example, if the trainee received a score of 5 for appropriateness/correctness of the element, a score of 3 for the trainee's confidence, and a score of 2 for the trainee's naturalness, an average score of 3.33 may be calculated and assigned to the trainee's answer. Different aspects of the trainee's provision of an element can be assigned corresponding different weightings. By way of example, the combined score can be calculated using the following weighted average formula (although other formulas may be used as well).
  • TotalMaximumScore is the maximum score that can be awarded for the answer
  • W is the weighting for a corresponding aspect of the answer
  • Score is the score awarded for a corresponding aspect
  • MaximumPossible is the maximum possible score that can be assigned for the corresponding aspect.
  • the system calculates and assigns to the trainee's answer a score of 3.75 out of a maximum of 5.
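The weighted average formula above can be written out as follows. The aspect weights used in the test (0.5 for correctness, 0.25 for confidence, 0.25 for naturalness) are assumptions chosen here so that the element scores of 5, 3, and 2 from the earlier example reproduce the stated 3.75 result.

```python
def combined_score(total_maximum_score, aspects):
    """Weighted average per the formula: TotalMaximumScore times the
    sum over aspects of W * Score / MaximumPossible, where the
    weightings W sum to 1."""
    return total_maximum_score * sum(
        w * score / maximum_possible for w, score, maximum_possible in aspects
    )
```

With equal weights of 1/3 each, the same inputs instead give the simple (unweighted) average of 3.33 noted earlier.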
  • a total score can be assigned for multiple answers provided by the trainee using an average, a weighted average, or other calculation based on the scores received for individual answers and/or aspects thereof.
  • the score for a given answer and the current total are automatically calculated in substantially real time as the trainee submits answers (or fails to submit answers), with the running total displayed via the trainer terminal and/or the trainee terminal.
  • the training system provides the scores to the trainer and/or the trainee via an electronic and/or hardcopy report generated by the system.
  • scoring can be by each sub-category or for a total category. If for a total category, a final combined score from sub-categories is presented (e.g., automatically presented or in response to a trainer command).
  • a best to worst rank order scoring (or worst to best rank order scoring) by sub-categories will be presented. This will allow the user/facilitator to know where to focus subsequent training based upon strengths and weaknesses.
  • the specific sub-category that should be studied/repeated is displayed.
  • the user/facilitator can limit the scoring report so that only the scores for those sub-categories that the user needs further training on (e.g., as determined by the system based on the failure of the user to score at least a certain specified threshold) are reported to the user/facilitator.
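The rank-ordered, threshold-limited report described above could be sketched as follows; the function and parameter names are hypothetical.

```python
def subcategory_report(scores, passing_threshold=None):
    """Rank sub-category scores worst-to-best so the user/facilitator
    knows where to focus subsequent training; if a passing threshold is
    given, limit the report to sub-categories still needing work."""
    ranked = sorted(scores.items(), key=lambda item: item[1])
    if passing_threshold is not None:
        ranked = [(name, s) for name, s in ranked if s < passing_threshold]
    return ranked
```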
  • different challenges will be repeated a different number of times.
  • the selection of the challenges to be repeated and/or the repeat rate are random or pseudo random.
  • the more significant or otherwise selected challenges are weighted so that they are or tend to be repeated more often than those challenges that are considered less significant. This weighting promotes the testing of more significant and/or more difficult to acquire skills/information.
  • the trainee is presented with a model answer, with the corresponding element displayed and/or verbalized.
  • optionally, the verbalization is provided with a confident sounding voice that the user should emulate.
  • the key elements provided in the answers are bolded, highlighted, underlined, or otherwise visually emphasized as compared to the sentence/phrase structure in which they are incorporated.
  • the key elements provided in the model answer are optionally role modeled, verbalized, with the text of the key elements appearing in a super-imposed manner as they are verbalized, for cognitive and behavioral embedding purposes. The text super-impositions are optionally highlighted as they are displayed.
  • the model answer is automatically presented and/or is presented in response to a trainee instruction (e.g., issued via a user interface presented via the trainee terminal).
  • the model answer is provided (e.g., textually and/or verbalized) with the element still displayed.
  • the elements are introduced one at a time, until all the relevant elements are displayed.
  • the revealed elements correspond to the model answer.
  • the trainee can take notes while the element and model answer are presented.
  • a “notes” field is presented on the trainee terminal wherein the trainee can enter notes, which will then be saved in computer memory.
  • the notes can optionally be printed and/or later accessed by the trainee.
  • the trainer can instruct the system to repeat a selected challenge or module.
  • the training system automatically repeats the challenge and/or module if the trainee's score falls below a threshold defined by the system, the trainer, the trainee's employer, the trainee and/or other designated person. For example, optionally a challenge and/or module is repeated if the trainee received less than a perfect score to thereby better drill the trainee to be able to provide correct answers that include the appropriate significant elements, without hesitation and in a confident manner.
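That repeat-until-mastery behavior amounts to a simple loop, sketched below; the repeat cap is an assumption of this sketch, added so the drill always terminates.

```python
def drill_until_threshold(present_challenge, score_response, threshold,
                          max_repeats=10):
    # Re-present the challenge until the trainee's score reaches the
    # threshold (e.g., a perfect score) or the repeat cap is hit;
    # returns the score for each attempt.
    scores = []
    for _ in range(max_repeats):
        score = score_response(present_challenge())
        scores.append(score)
        if score >= threshold:
            break
    return scores
```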
  • the system automatically presents the trainee with one or more challenges that the trainee had successfully mastered (e.g., as determined by the trainee's score) in one or more previous training sessions.
  • Such “surprise drilling sections” help test and reinforce the trainee's retention of information and skills obtained during training.
  • a training session can be presented in the form of a game to help keep the trainee's interest and/or to enhance the training impact. For example, when the trainee receives a score above a specified threshold, something pleasing happens (e.g., a certain tune is played, a certain image/video is displayed, a piece of an electronic puzzle is awarded, the trainee earns points/weapons/attributes that can be used in an electronic game, etc.).
  • the training can be presented in a format wherein the trainee must answer questions correctly (e.g., receive a predetermined score) in order to defeat an adversary (e.g., a simulated robot or alien).
  • there can be multiple players participating in a game where if the first to answer is incorrect, then others have the chance to answer and score.
  • a user's verbalized responses are recorded by hitting a “record” button. These recorded responses are immediately (or in a delayed fashion) played back via a playback button.
  • the objective in this example embodiment is to provide the user with substantially instant feedback about how the user sounds from a style and/or attitude perspective.
  • the facilitator/trainer asks questions of the user regarding the user's perception of the user's style and/or attitude. Examples of these questions are:
  • the questions are verbalized by a pre-recorded or synthesized voice at substantially the same time as text is displayed.
  • each question is “asked” separately.
  • two or more questions are asked together.
  • the user/facilitator presses a “proceed” button (or other corresponding control) and the next question is asked, and so on.
  • a control on the trainee and/or facilitator user interface (e.g., a save recording icon that can be activated by the trainee and/or facilitator)
  • there can be standard questions (e.g., 1, 2, 3, 4, 5, or more questions)
  • these questions can be customized.
  • the same questions can be asked each time (e.g., “How do you think you sounded?”, “How could you improve your response?”, etc.) or the system instead can ask different questions for different types of challenges. (e.g., for an objection, the system could ask “Do you feel you have overcome the customer's objections?”).
  • The term “Web site” is used to refer to a user-accessible network site that implements the basic World Wide Web standards for the coding and transmission of hypertextual documents. These standards currently include HTML (the Hypertext Markup Language) and HTTP (the Hypertext Transfer Protocol). It should be understood that the term “site” is not intended to imply a single geographic location, as a Web or other network site can, for example, include multiple geographically distributed computer systems that are appropriately linked together. Furthermore, while the following description relates to an embodiment utilizing the Internet and related protocols, other networks, such as networks of interactive televisions or of telephones, and other protocols may be used as well.
  • the functions described herein are preferably performed by executable code and instructions stored in computer readable memory and running on one or more general-purpose computers.
  • the present invention can also be implemented using special purpose computers, other processor based systems, state machines, and/or hardwired electronic circuits.
  • certain process states that are described as being serially performed can be performed in parallel.
  • terminals can include other computer or electronic systems, such as a personal digital assistant (PDA), an Internet Protocol (IP) phone, a cellular telephone or other wireless terminal, a networked game console, a networked MP3 or other audio device, a networked entertainment device, and so on.
  • the user input can also be provided using other apparatus and techniques, such as, without limitation, voice input, touch screen input, light pen input, touch pad input, and so on.
  • the following description may refer to certain messages or questions being presented visually to a user via a computer screen, the messages or questions can be provided using other techniques, such as via audible or spoken prompts.
  • One example embodiment utilizes a computerized training system to enhance a trainee's listening comprehension.
  • the training can be delivered via a terminal, such as a stand-alone personal computer.
  • the training program may be loaded into the personal computer via a computer readable medium, such as a CD ROM, DVD, magnetic media, solid state memory, or otherwise, or downloaded over a network to the personal computer.
  • the training program can be hosted on a server and interact with the user over a network, such as the Internet or a private network, via a client computer system or other terminal.
  • a client computer system can be a personal computer, a computer terminal, a networked television, a personal digital assistant, a wireless phone, an interactive personal media player, or other entertainment system.
  • a browser or other user interface on the client system can be utilized to access the server, to present training media, and to receive user inputs.
  • a training system presents a scenario to a user via a terminal, such as a personal computer or interactive television.
  • the scenario can be a pre-recorded audio and/or video scenario including one or more segments.
  • the scenario can involve a single actor or multiple actors (e.g., a human actor or an animated character) reading a script relevant to the field and skill being trained.
  • the actors may be simulating an interaction between a bank teller or loan officer and a customer.
  • the simulated interaction can instead be for in-person and phone sales or communications.
  • the actors may be simulating an interaction between a parent and a child.
  • the pre-recorded scenario can involve a real-life unscripted interaction.
  • FIG. 1 illustrates an example networked training system including a Web/application server 110 , used to host the training application program and serve Web pages, a scenario database 112 , that stores prerecorded scenario segments, and a user database 114 that stores user identifiers, passwords, training routines for corresponding users (which can specify which training categories/scenarios are to be presented to a given user and in what order), training scores, recordings of training sessions, and user responses provided during training sessions.
  • the training system is coupled to one or more trainee user terminals 102 , 104 , and a trainer terminal 106 via a network 108 , which can be the Internet or other network.
  • FIG. 2 illustrates an example training process.
  • the trainee and/or the trainer logs in to the training system via a training terminal.
  • the training system can utilize the log-in information to access account information for the trainee and/or trainer.
  • the account information optionally includes an identification of the training categories/modules that the trainee and/or trainer are authorized to access.
  • a training module is selected.
  • a training category/module can be selected by the trainee or the trainer from a menu of modules presented on the training terminal, wherein the menu includes modules that the trainee/trainer are authorized to access.
  • the system automatically selects the module based on the trainee's training history (e.g., the modules that the trainee has previously utilized and/or the trainee's scores).
  • a preparatory training session begins.
  • the preparatory training session presents information, questions, statements, and the like (sometimes referred to herein generally as materials), related to the training subject, where the trainee is expected to utilize some or all of the presented materials during the subsequent testing session.
  • the tested portion of the training begins.
  • the trainee is presented with a scenario related to the materials taught during the preparatory portion of the training.
  • the scenario will include one or more challenges by an actor pretending to be a customer, prospect, or other person relevant to the training subject.
  • the scenario may be provided via a video recording of a person or an animation (e.g., a FLASH animation).
  • the challenges can be in the form of a question (e.g., a question about a product or service, a question asking advice regarding a relevant matter, etc.) or a statement (e.g., a statement regarding the customer's current condition or plans, or regarding a product or service).
  • the trainee provides an answer in response to the challenge.
  • the trainer and/or the training system score the answer.
  • the correct elements are displayed to the trainer with corresponding scoring icons (e.g., circular scoring icons, or other desired shape) that the trainer can use to score the user.
  • the answer may be scored on the correctness of the answer, the quickness with which the answer was given, the naturalness of the answer, and/or the confidence and style in which the answer is given.
  • the model answer is presented via multimedia (e.g., audibly, optionally in conjunction with the video portion of the scenario) to the trainee on the training terminal, optionally with the corresponding materials presented textually.
  • the elements of the material may be presented in overlay format over the module scenario video/animation.
  • the next scenario is presented.
  • the next scenario can be randomly or pseudo-randomly selected, or the system optionally may enable the trainer and/or trainee to select the next scenario.
  • the next scenario is optionally a repeat of a previously presented scenario (e.g., from the current module or from another module).
  • the next scenario is optionally selected based on the trainee's score on a previous scenario.
  • if a determination is made that the testing portion of the module has been completed, the process proceeds to state 220, and a test report is generated, optionally including the scores for each challenge, a total score for each challenge, and/or a total score for the module.
  • the process ends or the trainee/trainer selects another module.
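The testing loop of the example process of FIG. 2 can be sketched as follows; the callback names are hypothetical, with answer collection, scoring, and model-answer presentation stubbed out as parameters.

```python
def run_module(scenarios, get_response, score_answer, present_model_answer):
    # For each scenario: present the challenge, collect and score the
    # trainee's answer, present the model answer, then accumulate the
    # per-challenge scores and the module total for the test report.
    report = []
    for scenario in scenarios:
        answer = get_response(scenario)
        report.append((scenario, score_answer(scenario, answer)))
        present_model_answer(scenario)
    total = sum(score for _, score in report)
    return report, total
```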
  • An example user interface can provide an introduction to the training system, including instructions on how to access training system drill categories.
  • a link to a log in page is optionally provided and the user is instructed to log into the system and choose a training module with the corresponding desired category and sub-category of drilling.
  • a user interface is provided via which the user or facilitator has the option of clicking on an “Access Pre-Study Screen” icon.
  • the Pre-Study user interface contains important or key elements the user needs to know in order to correctly answer the upcoming challenges.
  • the user can skip the pre-study user interface (e.g., if the user is already familiar with the category and sub-category and/or does not feel the need to view the pre-study user interface) by clicking a “Skip Pre-Study & Hear Challenge” icon or the like.
  • the user is further informed that when the simulated character stops, the user is to provide a first person answer/response to the challenge, and that the correct answers are based upon the elements (e.g., the key elements provided via the pre-study user interface) chosen by the user's company or by the creators of the training system.
  • the user is informed that when the user stops verbalization of the challenge response, the user or the facilitator should click on an “Answer” icon to obtain the correct answer(s).
  • the user is informed that the training system will now have a simulated figure verbalize a “challenge” (e.g., a statement or question). The user then responds to the challenge.
  • the system can now display elements (e.g., the key elements) of the guideline correct answer, which matches or substantially matches what they have learned in previous training and/or from the Pre-Study screen(s).
  • the answer user interfaces can display the elements and/or scores to the user/facilitator jointly, or the facilitator alone.
  • the scoring can include a score for the elements recited by the user, a score for immediacy of response, and a score for confidence and style.
  • a user interface where the user/facilitator can instruct the system via a control (e.g., “Click to hear Model Answer” icon) to verbalize the model answer with the key elements graphically super-imposed on the screen.
  • the elements are super-imposed and build up one at a time to display the elements previously revealed on the Pre-Study Screen(s). The revealing of each of these elements correlates with the verbalized Model Answer.
  • a control is provided via which the user/facilitator has the ability to repeat this particular “challenge” if they choose to. For learning purposes, if the user gets anything less than a “perfect score” (or other designated score), then they should repeat the sub-category to better ensure they can produce a fluid answer that contains the key elements, without hesitation and in a confident manner.
  • Another user interface is provided including a “proceed” control, which when activated, causes the system to verbally state another randomized challenge and the process continues.
  • the training module will inform the user and facilitator when all of the challenges within the sub-category have been completed.
  • the user and facilitator can elect to repeat the sub-category or move on to a different sub-category and/or category. Such repetitive drilling will better ensure trainees will learn the materials.
  • FIGS. 3A-II illustrate example user interfaces which can be presented via a Web browser or other application. Additional, fewer, or different user interfaces can be used in other example embodiments.
  • the language provided in the example user interfaces is not intended to limit the invention, but is merely an example of language that can be used. Other language and terminology can be used as well.
  • the example interfaces include controls (e.g., in the form of underlined links, buttons, or otherwise) that can be used to navigate the training session (e.g., proceed, next page, click for answer, etc.).
  • some or all of the text displayed via the user interface can instead or in addition be verbalized.
  • FIG. 3A illustrates an example introduction to the training system, including a listing of example training subject categories and sub-categories.
  • FIG. 3B illustrates example instructions that are optionally presented to a facilitator and/or user which describe how to access a training module.
  • the instructions provide a Website URL (uniform resource locator) via which the trainer or trainee can log-in and access the training program.
  • the user is further instructed to choose the desired training category/subcategory.
  • the instructions further instruct the trainee/trainer how to initiate the preparatory/pre-training phase of the training module, and provide information as to what to expect during the preparatory/pre-training phase.
  • the example instructions indicate that after the preparatory/pre-training phase, the “challenge” phase of the training module will be initiated, wherein a video/animated character will verbalize a challenge (e.g., a statement or question).
  • the trainee is instructed how to respond to the challenges and how the trainee's answers will be scored. Instructions are further provided on how to access model answers, and what to expect while accessing model answers.
  • the trainer/trainee are informed that the scenario can be repeated, that additional challenges will be presented, and that the module can be repeated.
  • FIG. 3C illustrates an example user interface informing the trainee that a practice training session is about to begin. The trainee is further informed that the facilitator has logged in, is participating in the training session, and has selected a training category/subcategory (e.g., alphabets/American).
  • FIG. 3D illustrates a user interface that provides an explanation to the trainee regarding the upcoming challenge testing, and provides instructions on how the trainee is to respond to the challenges (e.g., in the first person, as soon as the trainee hears a challenge).
  • a figure (e.g., an animated figure or actual person) is displayed which appears to verbalize the instructions.
  • FIG. 3E illustrates an example challenge in the form of a question.
  • FIGS. 3F-3G illustrate example scoring user interfaces, wherein scores (e.g., numerical scores) are assigned for the correctness of a trainee response to a challenge, the timeliness of the response, and the confidence with which the response was delivered.
  • FIG. 3F illustrates an example answer-scoring user interface used to score the correctness of the trainee's challenge response.
  • the scoring user interface can be completed by the trainee, the facilitator, or both jointly, as permitted by the training rules.
  • FIG. 3G illustrates a scoring user interface used to score the timeliness of the trainee's challenge response (e.g., whether the response was undesirably delayed or immediate).
  • FIG. 3H illustrates a scoring interface used to score the confidence with which the trainee responds to the challenge (e.g., not confident, somewhat confident, confident), with a different number of points assigned based on the level of confidence.
  • FIG. 3I illustrates a user interface that explains that a model answer will be presented listing the elements that should have been provided in response to the challenge. The trainee is further informed that notes may be taken (e.g., to enhance the trainee's performance).
  • FIG. 3J illustrates the correct elements, wherein the elements are optionally sequentially and cumulatively displayed and verbalized, so that when the last element is displayed all the elements are displayed.
  • FIG. 3K illustrates the calculated total score for the trainee's challenge response.
  • a “repeat scenario” control is provided in this example, which when activated by the trainee/trainer causes the challenge to be repeated.
  • a “proceed” control is provided that, when activated, causes the system to present the next challenge.
  • the next challenge is randomly/pseudo-randomly selected by the training system.
  • the trainer or trainee selects the next challenge.
  • FIG. 3L illustrates the presentation of the next challenge.
  • FIGS. 3M-3O illustrate example scoring user interfaces for the second challenge, wherein scores (e.g., numerical scores) are assigned for the correctness of a trainee response to a challenge, the timeliness of the response, and the confidence with which the response was delivered.
  • FIG. 3P illustrates a user interface that explains that a model answer will be presented listing the elements that should have been provided in response to the challenge. The trainee is further informed that notes may be taken (e.g., to enhance the trainee's performance).
  • FIG. 3Q illustrates the key elements, wherein the elements are optionally verbalized.
  • FIG. 3R illustrates the calculated total score for the trainee's challenge response.
  • FIG. 3S illustrates an example user interface congratulating the trainee on completing the challenges in the present module.
  • the trainee is further instructed to activate a “proceed” control to further utilize the current training module, or to activate an “exit module” control in order to proceed to the main menu (e.g., where the trainer/trainee can select another training module).
  • FIG. 3T illustrates a user interface that explains to the trainee that the trainee will now participate in a training module in a different category (product knowledge/checking accounts).
  • FIG. 3U illustrates a user interface that provides an explanation to the trainee regarding the upcoming challenge testing and provides instructions on how the trainee is to respond to the challenges. Optionally, a figure is displayed which appears to verbalize the instructions.
  • FIG. 3V illustrates a user interface where another challenge is presented, this one related to a checking account for minors.
  • FIGS. 3W-3Y illustrate example scoring user interfaces for this challenge, wherein scores (e.g., numerical scores) are assigned for the correctness of a trainee response to a challenge, the timeliness of the response, and the confidence with which the response was delivered.
  • FIG. 3Z illustrates a user interface that explains that a model answer will be presented listing the elements that should have been provided in response to the challenge. The trainee is further informed that notes may be taken (e.g., to enhance the trainee's performance).
  • FIG. 3AA illustrates the correct elements, wherein the elements are optionally sequentially and cumulatively displayed and verbalized, so that when the last element is displayed, all the elements that the trainee should have provided are displayed.
  • FIG. 3BB illustrates the calculated total score for the trainee's challenge response.
  • a “repeat challenge(s)” control is provided in this example, which when activated by the trainee/trainer causes the challenge to be repeated, but, optionally, only after the final scores are calculated.
  • a “see score” control is provided that, when activated, causes the current score for the challenge to be presented.
  • FIG. 3CC illustrates another example category selection user interface via which the trainee or the facilitator can select a training category (e.g., a different product or service to be trained on).
  • the example user interface illustrated in FIG. 3DD is presented via which a category focus can be selected (e.g., Product/Service Description, Product/Service Usage, Product/Service Objections).
  • FIGS. 3EE-3GG illustrate another set of example user interfaces providing key elements during the pre-training stage.
  • the corresponding model answers are optionally the same as the key elements.
  • the key elements and model answers can be presented textually and/or audibly. Certain portions of the key elements and model answers are emphasized to highlight the important concepts.
  • FIG. 3HH illustrates another example user interface providing scoring for a challenge response.
  • FIG. 3II illustrates another example user interface providing a scoring summary for a training session.
  • the summary includes scoring for drilling in three different product categories.
  • scores are displayed in rank order with the weakest/lowest score on top and strongest/highest score at the bottom within each component of the score.
  • a pre-recorded or synthesized voice tells the users how they did (e.g., a voice that simulates a robot speaking in a stilted manner, a pirate, or another fictional character).
  • the indications can instead, or in addition, be provided via text or with a “traditional” voice.
  • training can be done in a self-study mode without a trainer/facilitator.
  • the system can be used for self-coaching.
  • a replay option is provided for each category and/or sub-category (or a subset thereof) for performance improvement.
  • instructions will be verbalized.
  • certain embodiments teach and train a user to utilize information and skills in a simulated real-world environment.
  • the user optionally undergoes extensive testing, where their performance is scored based on their retention of the information, and their ability to provide the information to others in a natural, confident manner.
  • the training system aids users in internalizing and behaviorally embedding information and skills learned during training. Users are optionally trained to provide information, respond to objections, or ask questions as appropriate almost automatically, without undesirable pauses.
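The three-part scoring walked through in the figures above (correctness per FIG. 3F, timeliness per FIG. 3G, confidence per FIG. 3H, and the calculated total of FIG. 3K) can be sketched roughly as follows. The class name, field names, point values, and the simple-sum total are illustrative assumptions, not details taken from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class ChallengeScore:
    correctness: int  # answer-scoring (FIG. 3F): correctness/completeness of the response
    timeliness: int   # timeliness scoring (FIG. 3G): undesirably delayed vs. immediate
    confidence: int   # confidence scoring (FIG. 3H): not/somewhat/fully confident

    def total(self) -> int:
        # FIG. 3K displays a calculated total for the challenge response;
        # a simple sum of the three component scores is assumed here.
        return self.correctness + self.timeliness + self.confidence

score = ChallengeScore(correctness=4, timeliness=2, confidence=3)
print(score.total())  # → 9
```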

Abstract

The present invention is directed to interactive training, and in particular, to methods and systems for computerized interactive skill training. An example embodiment provides a method and system for providing skill training using a computerized system. The computerized system receives a selection of a first training subject. A training challenge related to the first training subject is accessed from computer readable memory. The training challenge is provided to a user via a terminal, optionally in verbal form. A first score related to the correctness and/or completeness of a verbalized challenge response provided by the user is stored in memory. A second score related to how quickly the trainee provided the verbalized challenge response is stored in memory. A third challenge score related to the confidence and/or style with which the trainee verbalized the challenge response is stored in memory.

Description

COPYRIGHT RIGHTS
A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by any one of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
CROSS-REFERENCE TO RELATED APPLICATIONS
Not applicable.
STATEMENT REGARDING FEDERALLY SPONSORED R&D
Not applicable.
PARTIES OF JOINT RESEARCH AGREEMENT
Not applicable.
REFERENCE TO SEQUENCE LISTING, TABLE, OR COMPUTER PROGRAM LISTING
Not applicable.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention is directed to interactive training, and in particular, to methods and systems for computerized interactive skill training.
2. Description of the Related Art
Many conventional skill training techniques tend to train users in how to respond to test questions, typically by multiple choice, true/false, or written sentence completion, rather than providing adequate training on using those skills in a real-world environment, that is, in interpersonal verbal responses.
Further, many conventional techniques for testing skills fail to adequately evaluate users' ability to utilize their skills in a real-world environment, that is, in verbal interactions.
SUMMARY OF THE INVENTION
The present invention is directed to interactive training, and in particular, to methods and systems for computerized interactive skill training.
Certain example embodiments teach and train a user to utilize information and skills in a simulated real-world environment. For example, a user provides verbalized responses that engender relatively instant feedback. Users are optionally trained to provide information, respond to objections, and/or ask questions as appropriate, automatically or almost automatically, without undesirable pauses. Optionally, users are scored based on their retention of the information, and their ability to provide the information to others in a natural, confident manner. Thus, certain embodiments aid users in internalizing and behaviorally embedding information and skills learned during training. Furthermore, certain embodiments of performance drilling serve as a coaching and self-coaching tool.
An example embodiment provides a computerized training system comprising programmatic code stored in computer readable memory configured to: receive log-in information for a trainee and/or a facilitator; identify one or more training modules based at least in part on at least a portion of the login information; receive a selection of at least one of the identified training modules; cause the verbalization of a challenge in conjunction with a displayed human or animated person simulating a customer or prospect, wherein the challenge includes a statement or question; receive a first challenge score from the facilitator related to a verbalized trainee challenge response, wherein the first challenge score is related to the correctness and/or completeness of the challenge response; receive a second challenge score from the facilitator related to the verbalized trainee challenge response, wherein the second challenge score is related to how quickly the trainee provided the challenge response; receive a third challenge score from the facilitator related to the verbalized trainee challenge response, wherein the third challenge score is related to the confidence and/or style with which the trainee verbalized the challenge response; and randomly or pseudo-randomly select a next challenge that is to be provided to the trainee.
An example embodiment provides a computerized training system comprising programmatic code stored in computer readable memory configured to: receive a selection of at least one training subject; provide the trainee with a challenge via a simulated customer or prospect related to a product or service corresponding to the selected training subject; receive a first challenge score from a facilitator related to a verbalized trainee response, wherein the first challenge score is related to the correctness and/or completeness of the challenge response; receive a second challenge score related to the verbalized trainee response, wherein the second challenge score is related to how quickly the trainee provided the challenge response; and receive a third challenge score from the facilitator related to the verbalized trainee response, wherein the third challenge score is related to the confidence and/or style with which the trainee verbalized the challenge response.
An example embodiment provides a method of providing training using a computerized system, the method comprising: receiving at a first computerized system a selection of a first training subject; accessing from computer memory a training challenge related to the first training subject; providing via a terminal the challenge verbally to a user, wherein the challenge is intended to simulate a question or statement verbally provided by a person in a conversation; storing in computer readable memory substantially immediately after the user verbally provides a challenge response a first score related to the correctness and/or completeness of the challenge response; and storing in computer readable memory substantially immediately after the user verbally provides the challenge response a second score related to how quickly the user provided the challenge response.
An example embodiment provides a method of providing training using a computerized system, the method comprising: accessing from computer memory a training information challenge; providing via a terminal the information challenge to a user, wherein the information challenge is intended to simulate a question or statement from another person; storing in computer readable memory a first score related to the correctness and/or completeness of an information challenge response provided by the user, and a second score related to the timing of the information challenge response provided by the user, wherein the first and second scores are stored substantially immediately after the user provides the information challenge response; accessing from computer memory a training challenge related to an objection; providing via a terminal the objection challenge to a user, wherein the objection challenge is intended to simulate a question or statement from the person; and storing in computer readable memory a third score related to the correctness and/or completeness of an objection challenge response provided by the user, and a fourth score related to the timing of the objection challenge response provided by the user.
An example embodiment provides a method of providing training using a computerized system, the method comprising: accessing from computer memory a training challenge related to a first training subject; providing via a terminal the challenge verbally to a user, wherein the challenge is intended to simulate a question or statement from a person to simulate a real-time conversation with the person; and storing in computer readable memory at least one of the following substantially immediately after a challenge response is provided by the user: a first score related to the correctness and/or completeness of the challenge response provided by the user; a second score related to how quickly the user provided the challenge response; a third challenge score related to the confidence and/or style with which the user verbalized the challenge response.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described with reference to the drawings summarized below. These drawings and the associated description are provided to illustrate example embodiments of the invention, and not to limit the scope of the invention.
FIG. 1 illustrates an example networked system that can be used with the training system described herein.
FIG. 2 illustrates an example process flow.
FIGS. 3A-II illustrate example user interfaces.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
The present invention is directed to interactive training, and in particular, to methods and systems for computerized interactive skill training. Certain embodiments also provide blended learning during the period of interactivity (e.g., computer-interaction learning with a human facilitator participating).
As discussed in greater detail below, example embodiments utilize a processor-based training system to drill and train users with respect to acquiring certain information and skills. Certain embodiments train a user to utilize the acquired information and skills in a simulated real-world environment where the user interacts with another person (real or simulated). The user is scored based on their retention of the information, and their ability to provide the information to others in a natural, confident manner (e.g., without hemming and hawing). Further, the training system optionally enables real-time or post-testing scoring of user training sessions. The scoring optionally includes sub-category scoring, consolidated category scoring and scoring which helps the user and others to help the user to focus upon areas that need significant or the greatest improvement. For example, the category can be financial transactions, and sub-categories can include saving deposits, withdrawals, debit cards, checking accounts, and credit cards. Optionally, the consolidated scoring report provides a total score, and subcategory scores report individual scores for corresponding subcategories, so the user and others can better understand the user's performance deficits at a more granular level, and can focus additional training on lower scored subcategories.
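The consolidated and subcategory scoring just described can be sketched as follows, using the financial-transactions example above. All names and point values are illustrative assumptions; the disclosure does not specify a scoring scale.

```python
# Illustrative sketch of consolidated category scoring with subcategory
# breakdown for a "financial transactions" category: a total score, plus
# per-subcategory scores that reveal deficits at a more granular level.
subcategory_scores = {
    "saving deposits": 8,
    "withdrawals": 6,
    "debit cards": 4,
    "checking accounts": 9,
    "credit cards": 5,
}

consolidated = sum(subcategory_scores.values())
# Subcategories needing the most additional training appear first.
focus_order = sorted(subcategory_scores, key=subcategory_scores.get)
print(consolidated, focus_order[0])  # → 32 debit cards
```

Sorting the subcategories by ascending score also matches the rank ordering of the summary interface (FIG. 3II), where the weakest/lowest score appears on top.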
Optionally, so as not to make a user unduly nervous regarding the training process and to reduce the fear of obtaining a low score, scoring can be deleted or otherwise not retained in computer accessible memory long term (e.g., it is removed from memory when the user logs out of the training system or earlier) and is optionally not printed out in hardcopy form, and instead a report is generated and stored indicating simply that the user needs help or further training with respect to certain categories/subcategories (e.g., without making a point or grade distinction with respect to other users that also need help).
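The privacy-preserving reporting option above can be sketched as follows: raw scores are discarded rather than retained, and only a list of categories/subcategories needing further training is kept, without point or grade distinctions. The passing threshold and all names are assumptions for illustration.

```python
# Hedged sketch: convert raw scores into a "needs further training" report
# and discard the raw scores, as in the option described above.
def needs_help_report(scores, passing=7):
    report = sorted(area for area, s in scores.items() if s < passing)
    scores.clear()  # raw scores are not retained in accessible memory
    return report

scores = {"debit cards": 4, "checking accounts": 9, "credit cards": 5}
print(needs_help_report(scores))  # → ['credit cards', 'debit cards']
print(scores)                     # → {} (scores not retained)
```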
The training optionally enables a user to provide answers and information to others in real life situations with customers and prospects in a manner that instills trust and confidence with respect to the user. Thus, the training system aids users in internalizing and behaviorally embedding information and skills learned during training. Users are optionally trained to provide information, respond to objections, or ask questions, as appropriate, automatically or almost automatically, without undesirable pauses. The training system as described herein can optionally be configured and used to provide training with respect to academic knowledge, and/or other skill improvement that involves verbalization, including relationship building. Examples of categories include, but are not limited to, some or all of the following: Information (e.g., Product Information, information regarding an academic subject, information about a person, etc.), Objections (e.g., Product Objections, objections to a course of action, etc.), Generic Objections (e.g., generic objections to products or services), Service Queries, Resolving Service Problems, Dealing with Angry Customers, Dealing with Life Events (e.g., Divorce, Marriage, Birth, Death, Travel, etc.), Making Referrals to Specialists, Differentiation and Orientation Statements, Sales, Service and Relationship Technique Drilling.
In an example embodiment, a user, such as a trainee, utilizes a training terminal (e.g., a personal computer, an interactive television, a networked telephone, a personal digital assistant, an entertainment device, etc.) or other device to access a training program stored locally or remotely in computer readable memory. The user may be requested or required to log-in (e.g., provide a password and/or user name) to access the training program and/or one or more training modules. Optionally, the training system utilizes the log-in information and/or a cookie or other file stored on the user's terminal to determine which training scenarios/modules the user has already taken and/or completed, so that the system can automatically select the appropriate training module for the user and store the user's interactions and/or score in a database in association with the user identifier.
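The automatic module selection described above, driven by stored training history (e.g., recovered from the log-in record or a cookie), might be sketched as below. The function and module names are illustrative assumptions.

```python
# Hedged sketch: auto-select the first module in the catalog that the user
# has not yet completed, based on stored training history.
def select_module(all_modules, completed):
    for module in all_modules:
        if module not in completed:
            return module
    return None  # every module has already been completed

modules = ["checking accounts", "savings accounts", "credit cards"]
print(select_module(modules, completed={"checking accounts"}))  # → savings accounts
```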
A library/cataloging function provides users/facilitators the ability to precisely choose the skill training desired. Optionally, the user/facilitator can choose full categories and/or sub-categories. For example, optionally, the system can present the user (or a trainer) with a selection of modules (e.g., in the form of categories/training sequences), from which the user and/or facilitator selects the desired module. Optionally, a training administrator can select and specify the module to be presented to the user. Optionally, the system automatically selects which module (or segment therein) is to be presented based on the user's training history (e.g., which modules the user has completed and/or based on corresponding module training scores), the user's training authorization, and/or other criteria. The facilitator optionally acts as a coach.
Different training modules are optionally provided which drill users on specific areas within their industry/job function. For example, certain modules may be intended to prepare users for dealing with actual customer or prospect challenges. Modules may be focused on learning about products/services, comparisons of product/services (e.g., comparisons of product/services offered by the user's employer, comparisons of product/services offered by the user's employer with products/services of another company or other entity), handling customer complaints, resolving service issues, providing customers with background information on the company, etc. Modules may also be focused on academic training and/or relationship building, etc.
The following are non-limiting illustrative examples of modules in the form of challenge categories and sub-categories, which can be presented to a user via a user interface.
1. Product Knowledge
    • Checking accounts
    • Savings accounts
    • Credit cards
    • Loans
    • IRA accounts
    • Money market accounts
With respect to product-related mastery training, there can be sub-sections (e.g., 1, 2, 3, or more core sub-sections) with respect to testing different aspects of knowledge for a given subject. For example, a set of sub-sections can include Product Descriptions, Product Usage and Product Objections. Product Descriptions means responding to a more general question/challenge regarding a product, such as “tell me about your home equity lines of credit” or another product. Product Usage means responding to a question/challenge regarding details or specifics of a product's operation or usage, such as “Exactly how does a home equity loan work?”, “Are there any fees when I use the account?”, “Are there any minimum credit balances I must maintain?”, “What are the yearly fees, if any?”, etc. Product Objections are those objections and/or resistances expressed by customers and/or prospects regarding a specific product (or service) (e.g., “I don't see the need to get a platinum credit card when a gold or regular credit card will do”, “I won't pay a monthly fee to use checks”). This is distinct from Generic Objections, which are objections and/or resistances that would apply generally to products/services and/or situations (e.g., “I am not interested”, “I don't have time”, “I don't have the money,” “I don't like your organization”, etc.). In an example embodiment, Generic Objections are a separate performance drilling section.
Optionally, a user and/or facilitator can limit a training session to one sub-section for one or more products, or the user and/or facilitator can drill down/navigate to other subsections. For example, randomized challenges presented during a training session can be limited to Product Descriptions and/or optionally there can be integrated product-related mastery training across several or all subsections for a product, such as Product Descriptions, Product Usage, and Product Objections. Optionally, the training is performed in a specified subsection sequence (e.g., first Product Descriptions, second Product Usage, and third Product Objections), or the training can include randomized challenges across the multiple subsections. Optionally, even when there is randomized drilling in a specific subsection (e.g., Product Description), there can be automated links that can be activated by the user and/or facilitator so as to drill down to other subsections (e.g., Product Usage challenges and Product Objection challenges).
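Limiting randomized drilling to selected subsections, or spanning several, can be sketched as follows. The challenge data and all function names are illustrative assumptions, not part of the disclosure.

```python
import random

# Sketch: a pool of challenges tagged by subsection; randomized drilling can
# be restricted to chosen subsections (e.g., only Product Descriptions) or
# span several subsections at once.
CHALLENGES = [
    ("Product Descriptions", "Tell me about your home equity lines of credit."),
    ("Product Usage", "Exactly how does a home equity loan work?"),
    ("Product Objections", "I won't pay a monthly fee to use checks."),
]

def drill_pool(challenges, subsections):
    """Return the challenges belonging to the selected subsections."""
    return [text for subsection, text in challenges if subsection in subsections]

def random_challenge(challenges, subsections, rng=random):
    # Randomly/pseudo-randomly pick the next challenge from the limited pool.
    return rng.choice(drill_pool(challenges, subsections))

print(drill_pool(CHALLENGES, {"Product Descriptions"}))
```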
As similarly discussed above, challenges can relate to comparisons, such as comparisons of product/services, people, places, etc. By way of illustration, the comparisons can include comparisons of products/services offered by the user's employer, comparisons of products/services offered by the user's employer with products/services of another company or other entity, and/or products and services of two or more other entities other than the user's employer. For example, a challenge can be a question regarding two different products or services, such as:
“What is the difference between a credit card and a debit card?”
“How does an adjustable rate mortgage loan compare with a fixed rate mortgage loan?”
“How does your higher price vacuum cleaner compare with your economy model?”
“How does the sports version of this car compare with the standard version?”
“How does your product compare with that of your competitors?”
“Why is your product more expensive than that of your competitor?”
“How does the service compare at the following three hotel chains?”
Optionally, if the user scores at least a predetermined or other threshold (e.g., “four out of five”, “two out of three”, “eight out of nine”, or other threshold) with respect to a certain score (e.g., a key elements score, explained in greater detail below), then an automatic linkage is provided to another category (e.g., the Product/Service Usage category) so that the linked-to category will next be tested. Likewise, if the user's score meets a certain threshold (e.g., “four out of five”) in the Product/Service Usage category, there would be an automatic linkage to still another category (e.g., the Product/Service Objections category). Optionally, if the user fails to meet a designated threshold, additional and/or repeated challenges within the current category are presented to further drill the user in the current category until the user's score improves to meet the threshold (or another specified threshold).
Optionally, if the user did not score at least a specified threshold (e.g., “four out of five”) in a category, the user must repeat the category drilling until the user meets the specified threshold before being able to proceed to the next category.
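The threshold-based linkage between categories described above can be sketched as follows: meeting the threshold links the user to the next category, while falling short repeats the current one. The category sequence and the default threshold of four are illustrative assumptions.

```python
# Sketch of threshold-gated category progression, per the description above.
SEQUENCE = ["Product/Service Descriptions", "Product/Service Usage",
            "Product/Service Objections"]

def next_category(current, key_element_score, threshold=4):
    if key_element_score < threshold:
        return current  # repeat drilling in the current category
    i = SEQUENCE.index(current)
    # Automatic linkage to the next category; None when the sequence is done.
    return SEQUENCE[i + 1] if i + 1 < len(SEQUENCE) else None

print(next_category("Product/Service Descriptions", 3))  # below threshold → repeat
print(next_category("Product/Service Descriptions", 4))  # meets threshold → advance
```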
2. Dealing with Angry Customers/Customer Complaints
    • Waiving fees or service charges
    • Offering upgraded service
    • Closing accounts
3. Resolving Service Problems
4. Background on the Company
Before presenting the actual training user interfaces, the system optionally provides text, audio, and/or video instructions to the user that explain the purpose of the selected training module, how the user is to interact with the training program, the scoring process, and/or other information.
Optionally, a trainer/facilitator is online and/or present when the user/trainee is undergoing training via the system. For example, the trainer may be sitting alongside the trainee, looking at the same terminal screen and/or the trainer may be viewing the screen of a separate trainer terminal which presents similar or the same user interfaces as viewed by the trainee, optionally with additional trainer information (e.g., training answers). Optionally, the trainer provides the trainee with instructions on how to utilize the training system and/or provides real time or delayed scoring of the trainee's training session, as described in greater detail below.
In this example, the system presents a user interface to the trainee that informs the trainee regarding the subject matter of the training session. For example, the system can be used to train a sales and/or service person in a particular industry (e.g., banking, finance, travel agency, automobile sales, telephony, utilities, etc.), train a person on how to relate in a personal situation (e.g., with a spouse, child, sibling, parent, girlfriend/boyfriend, etc.), train a person with respect to academic knowledge, or for other purposes.
Thus, by way of illustration, a trainee may be informed that the training session provides training with respect to credit cards for minors. By way of further illustration, the training may be intended to train a user in how to respond to a more open-ended question. For example, a question or comment may relate to a customer's or prospect's marital status, health, a trip, a residence, and/or a child. The system can train the trainee how to respond to such questions or comments, which can take the following example forms:
“I am getting a divorce (or other life event), what should I do?”;
“I am getting married this summer and need a loan to pay for the wedding”;
“We are planning to take a cruise, do you have any recommendations on how to finance it?”;
“We are planning to remodel our house, what type of loans do you offer?”;
“How should we be saving money for our child's future education?”
The training optionally trains the user to overcome objections to a course of action proposed by the trainee to a customer/prospect. By way of still further example, the training may be intended to train the user in how to handle a customer that comes in with a service complaint (e.g., “The product does not work as described” or “Why weren't my funds transferred as instructed?”).
The training system optionally provides academic training related to subject matter taught in a school or employer classroom setting, or otherwise (e.g., “Who were the first five Presidents of the United States?”; “List, in order, the 10 steps that need to be taken in order to approve a loan request”; “Whom should you first attempt to contact in the event there has been a work accident?”, etc.). By way of further example, the training can be related to math, history, English, a foreign language, computer science, engineering, medicine, psychology, proper procedures at a place of employment, etc. Thus, for example, the training is not necessarily related to interaction with or challenges from another person, such as a customer, prospect, or family member. The academic training can be used to reinforce training previously provided to the trainee.
In this example, the trainee is also informed of the different stages of a training session. For example, the trainee is informed that pre-study screens (also referred to as user interfaces) will be available, wherein the trainee is provided with key or other elements that the trainee will be expected to know and utilize during the “tested” portion of training session. The trainee is further informed that after the pre-study screen(s), the tested portion will begin. The pre-study screens/user interfaces optionally include text, an audible verbalization of the text, and/or a video or animated figure synchronized with the verbalization.
The pre-study screen(s) are intended to familiarize the trainee with the elements (optionally, only the key elements) that are to be tested, both to educate the trainee and so that the trainee will not feel that they are being unfairly tested. The training will be in the form of challenges that the trainee is asked to respond to. To overcome or successfully respond to these challenges, there are certain elements (e.g., key elements) that the trainee has to state. The pre-study screen(s) will provide the trainee with the key elements necessary in responding to the challenges. In an example embodiment, clients (e.g., employers of trainees) have the option of deciding on the key elements the trainees should be tested upon, and/or the operators/creators of the training system will make these decisions. This enables expectations to be aligned with the training being provided to users.
Optionally, the pre-study screens may be automatically or manually (e.g., by the trainer, user, and/or a system operator) turned off for one or more training sessions for a given user. For example, if the user has already viewed a given pre-study screen, a bypass control (e.g., a button or link) is optionally provided on the trainee and/or trainer user interface prior to displaying the pre-study screen(s), which, when activated, causes the pre-study screen(s) to be skipped. A facilitator may elect to activate the bypass button because the user should already know what the pre-study key elements are based upon prior training.
There may be other reasons for bypassing or not presenting a pre-study screen. For example, not presenting the pre-study screen(s) provides advanced real-world “stress testing”: when dealing with a person or persons who verbalize a challenge, the recipient of the challenge typically does not have an opportunity to refer to “Pre-Study” materials. Not presenting the pre-study screen (e.g., at pre-determined times or randomly) can be part of a “surprise attack” performance drilling session, which makes the drilling more exciting and keeps a trainee more alert. In addition, turning off the pre-study screen(s) prior to a scored session enables the system to “pre-test” users' knowledge base before they are able to observe pre-study key element screens. Thus, turning off pre-study screens can serve as a motivator to the user if their score is sub-par, as well as to establish performance baselines. The performance baseline scoring can be compared with scoring after the user has viewed the pre-study screens to provide the user/trainer/company with “before and after” evidence of progress.
Optionally, a time limit may be set on how long the user can view/study a given pre-study screen and/or a set of pre-study screens. A timer (e.g., a count down timer, a color-coded timer (green, yellow, red), or other timer) may be displayed to the user and/or trainer showing how much time is left for a given pre-study screen and/or the set of pre-study screens. This can serve to provide the user with a brief reminder of the key elements which they should have already pre-studied, but not to give them unlimited time to learn the key elements from scratch. In addition, this serves to limit the time the user spends on the entire module, so that the user does not take “20 minutes” (or other excessive period of time) to complete a module which should have been completed in eight minutes (or other period of time).
Optionally, the ability to achieve the above is on a “key element by key element basis”. That is, as each key element screen is “brought up”, it can only be viewed for a limited period of time before the next key element screen is brought up, and so on. The rate at which the key elements are presented is optionally pre-set or set during the session by the facilitator.
Optionally, the above can be achieved by screens proceeding automatically to the next screen and/or screens “fading to black” at a set time period (e.g., pre-set or set during the session by the facilitator) and then having the next screen automatically coming up, etc.
For example, with respect to product descriptions and product usage, there may be five key elements for product descriptions and five key elements for product usage, but many more elements, benefits and features listed based upon a company's brochures, Web sites and other product informational sources, let alone internal communications.
Optionally, because of the digital nature of the information “reservoirs”, the system enables a company to alter/adapt/change key elements based upon real world realities. For example, if it is discovered that the five existing key elements to answering a particular challenge are not as effective as a different set of key elements in the real world (even a change in a single key element), then the key elements for this particular objection are changed accordingly to match experiential realities.
It may be advantageous in certain instances to emphasize or only train users with respect to certain more important elements (e.g., key elements) as it is recognized that most users will only be able to memorize verbalizations for a limited number of elements, and receivers of information will only be able to process a limited number of elements/messages. Notwithstanding the foregoing, other elements may optionally be mentioned on the pre-study screens.
The pre-study elements (e.g., the key elements) for a category are optionally packaged or presented together so as to better train the trainee to respond to randomized challenges which better mimic real world situations. Additionally, certain elements (e.g., key elements), are kept focused (e.g., unitary, short) to enhance objective scoring and reduce subjective scoring. The key elements may optionally be role modeled, verbalized, with the text of the key elements appearing as they are verbalized, for cognitive and behavioral embedding purposes. The text super-impositions are optionally highlighted as they are displayed.
Optionally, in the pre-study screen(s), there will be related elements that are not as essential as the key elements. Optionally, trainees will be instructed or advised to study these related elements and may be provided extra credit for identifying these when responding to the challenges. The pre-study screen(s) are optionally consolidated to contain the key elements related to various challenges within the same category or module. Optionally, printing of the pre-study screens (e.g., a listing of the elements) is permitted, while the printing of certain other user interfaces (e.g., the challenge user interfaces and/or the model answer user interfaces) is inhibited.
Optionally, different challenges are repeated different numbers of times. Optionally, the selection of the challenges to be repeated and/or the repeat rate are purposely random or pseudo random to mimic the real world experience and to prevent rote memorization. Optionally, the more significant elements are weighted (e.g., by a person crafting the training) so that the more significant elements are or tend to be repeated more often than those elements that are considered less significant. The weightings can be stored in computer readable memory and optionally automatically applied by the system. Optionally, the trainer can manually instruct, via a user interface control, that one or more select challenges are to be repeated (e.g., in a non-randomized fashion).
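The weighted, pseudo-random repetition described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the challenge names, weights, and `pick_challenge` helper are all hypothetical.

```python
import random

def pick_challenge(challenges, weights, seed=None):
    """Pseudo-randomly select the next challenge to present, biased so
    that more heavily weighted (more significant) challenges tend to
    be repeated more often."""
    rng = random.Random(seed)
    return rng.choices(challenges, weights=weights, k=1)[0]

# Hypothetical challenge pool: the more significant elements carry
# larger weights, so they recur more frequently across a session.
challenges = ["fee objection", "minor-account risks", "small talk"]
weights = [5, 3, 1]

# A 100-challenge drilling session (seeded per draw for repeatability).
session = [pick_challenge(challenges, weights, seed=i) for i in range(100)]
```

The stored weightings mentioned in the text would simply be read from memory into the `weights` list before each session.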
By way of example, the challenges may include one or more of the following elements and/or other elements:
    • facts regarding the subject matter at hand that the trainee will be expected to know and provide to a customer/prospect;
    • questions the trainee will be expected to ask of a simulated person (e.g., of a customer/prospect, wherein the trainee is a customer service person, in order to determine that customer's needs and/or wants);
    • social conversation intended to put another person at ease and/or to establish a sense of trust.
The challenges may be presented as displayed text, as part of a role playing scenario (e.g., where the user is presented with a scenario involving an animation or person playing an appropriate role, which presents the opportunity for the trainee to state/provide the elements), with the elements presented audibly, textually (optionally in an overlay over the video portion), and/or otherwise.
The elements may be those considered by the trainee's management to be more significant or key so that the trainee is not overly burdened with having to remember all related elements (which can optionally be accessed instead during a real-life interaction via a computer or otherwise, after the trainee has built credibility and trust with an actual customer or prospect, wherein the credibility and trust is the result, at least in part of the trainee's ability to respond without having to read from a list, manual, brochure, etc).
Optionally, the trainee's management or other authorized personnel can specify, select, or modify the elements as desired. By optionally placing the burden on the trainee's management/employer to identify the more significant elements, they are encouraged to better understand and identify what is expected from employees performing a given job function.
Once the pre-training session has been completed, the trainee is informed that the tested portion of the training session is about to begin. The test portion includes a scene having one or more people (real or animated) playing an appropriate role, such as that of a customer, prospect, a family member, or other person as appropriate for the skill being trained. The actors playing the roles can read a script relevant to the field and skill being trained.
The script includes “challenges” (e.g., questions, statements, or information) randomly or pseudo randomly presented, or presented in a predetermined order to the trainee. The challenges are optionally verbalized and/or acted out by a real or animated person/actor. The person or people in the scene may or may not be lip-synced to a verbalization of the script. The person or people in the scene may be of different ethnicities as selected by the employer, the facilitator, the training system provider, or other entity. The speech patterns and/or accents of the person or people in the scene may be selected by the employer, the facilitator, the training system provider, or other entity. The foregoing selection may be made from a menu presented on a terminal (e.g., a menu listing one or more ethnicities and/or accents) and stored in memory.
The trainee is expected to respond with the appropriate element(s) taught during the pre-training session. Optionally, a timer (e.g., a countdown timer) is displayed to the trainee when a challenge is provided. In an example embodiment, the trainee provides the response verbally, but may also do so by typing/writing in the response, by selecting the response from a multiple choice offering, or otherwise. The system, automatically and/or in response to a trainer instruction, presents the correct answer to the trainer.
The trainee will then be graded/scored based on one or more of the following elements: the appropriateness/correctness of the element provided by the trainee, the trainee's confidence and/or style when providing the element, and/or the naturalness and/or speed with which the trainee provides the element, or any combination thereof. Thus, in an example embodiment, a trainee that provides an appropriate element, but that was too slow or too fast in providing the appropriate element so that it would appear to a real customer as being unnatural, and/or appeared to be/sounded nervous when providing that element, will not receive a “perfect” score for that element. In addition, optionally the trainee will be graded on how closely the text of the element(s) recited by the trainee matches that provided to the trainee on the answer screens, which matches the key elements on the pre-study screens.
Optionally, a countdown timer is set to a certain value during a challenge response period and the trainee has to provide the challenge response before the timer reaches a certain point (e.g., 0 seconds). The current countdown time can be displayed to trainee in a “seconds” format, and/or in other formats related to how much time is remaining (e.g., green for a first amount of time, yellow for a second amount of time, and red for a third amount of time). Optionally, the trainee's score is based on the timer value at the time the trainee provided the response. Optionally, a potential score is displayed which is decremented as the timer counts down, and the trainee is assigned the score displayed when the trainee provides the response. Optionally, a system operator and/or the facilitator can set the initial countdown time and/or the rate of the score reduction. Optionally, the facilitator can reset or change the timer value in real-time or otherwise.
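The decrementing potential score and color-coded display described above can be sketched as follows. The initial time, maximum score, decrement rate, and color thresholds are illustrative assumptions; in the system they would be set by the operator or facilitator.

```python
def score_for_response(elapsed, initial_time=30.0, max_score=100, rate=2):
    """Score awarded if the trainee responds `elapsed` seconds into the
    countdown; the displayed potential score drops by `rate` points per
    second, and a response after the timer expires scores zero."""
    remaining = max(initial_time - elapsed, 0.0)
    if remaining == 0.0:
        return 0  # timer reached zero before the response was given
    return max(max_score - int(elapsed) * rate, 0)

def timer_color(remaining, green_above=20, yellow_above=10):
    """Color-coded countdown: green for the first stretch of time,
    yellow for the second, red for the last."""
    if remaining > green_above:
        return "green"
    if remaining > yellow_above:
        return "yellow"
    return "red"
```

With these assumed settings, an immediate answer earns the full 100 points, an answer 10 seconds in earns 80, and an answer after the 30-second limit earns nothing.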
Optionally, key elements for correct answers will be in the “correct order/sequence”. That is, what the client and/or training implementer believes or has identified as the preferred presentation sequence. Optionally, the user is graded on the correctness of the sequence of their answer as well.
By way of illustration, if a bank employee is being trained to recommend appropriate banking services, an actor (real or simulated) may play a bank customer or prospect. The trainee observes the scene, and recites the appropriate element(s) at the appropriate time in response to questions asked by or information offered by the bank customer or prospect which may relate to banking services. For example, if the trainee is being trained to recommend and/or offer information regarding a checking account for minors, the actor may ask questions regarding why a minor needs a checking account, the costs associated with a checking account, and the risks associated with a minor having a checking account. The trainee is expected to respond to the customer questions/information with the element(s) (e.g., the key elements) taught during the pre-training session. Optionally, the trainee is not permitted to refer to notes or other materials (e.g., printed materials, such as books or course handouts) during the testing phase. The trainee's response may be observed (e.g., listened to and/or viewed) in substantially real-time by the trainer. Optionally, the trainee's response is recorded (e.g., a video and/or audio recording) by the terminal or other system for later playback by a trainer and/or the trainee, and/or for later scoring.
Optionally, embedded or otherwise associated with the audio track and/or video track of the scene is computer-readable digital metadata that identifies where/when a challenge occurs in the track, what the challenge is, and/or the element that the trainee is to provide. While the trainer is observing the trainee (in real time or later via a recording), the correct elements are automatically (or optionally in response to a trainer or trainee action) presented to the trainer, optionally using the same text as presented to the trainee during the pre-training phase. For example, the trainer terminal can present the same scene being observed by the trainee, wherein an indication is provided to the trainer as to when the trainee is to provide an element, and the system presents the correct element via a textual overlay with respect to the scene. This enables the trainer to efficiently score the trainee based on the element (if any) provided by the trainee. In addition, the trainer may score the confidence and naturalness/timing with which the trainee provided the element, as similarly discussed above.
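The metadata associated with a scene's audio/video track might be structured as in the following sketch. The field names, timestamps, and the `challenge_at` helper are hypothetical, chosen only to illustrate how the trainer terminal could look up the correct elements for the current playback position.

```python
# Hypothetical per-scene metadata: where each challenge occurs in the
# track, what the challenge is, and the key elements the trainee is to
# provide in response.
scene_metadata = {
    "scene_id": "checking-account-minors-01",
    "challenges": [
        {
            "start_seconds": 12.5,  # when the challenge is verbalized
            "challenge": "Why does a minor need a checking account?",
            "key_elements": ["teaches budgeting", "parental co-ownership"],
        },
        {
            "start_seconds": 48.0,
            "challenge": "What are the risks of a minor having an account?",
            "key_elements": ["overdraft limits", "parental monitoring"],
        },
    ],
}

def challenge_at(metadata, playback_seconds):
    """Return the most recent challenge at the given playback position,
    so the trainer terminal can overlay the correct elements."""
    current = None
    for c in metadata["challenges"]:
        if c["start_seconds"] <= playback_seconds:
            current = c
    return current
```

During real-time observation or later playback, the trainer interface would call something like `challenge_at` as the scene plays to drive the textual overlay of correct elements.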
The score may be entered by the trainer into a scoring field presented via the trainer terminal. In an example embodiment, the scores are entered and stored in computer memory substantially immediately after the trainee provides a verbal challenge response (e.g., within 15 seconds, 30 seconds, or 60 seconds). Optionally, several scoring fields are provided so that the trainer can enter scores for different aspects of the trainee's provision of the element. For example, there may be a “correct element” field, a “level of confidence” field, a “naturalness of response” field, and/or a “timing of response” field. Optionally, the field may enable the trainer to enter (or select) a number score (e.g., 1-5), a letter score (e.g., A-F), a phrase (e.g., excellent, good, fair, poor), or other score. Optionally, scoring icons (e.g., circular scoring icons) are provided on the answer screens. The facilitator will click on a scoring icon to provide the trainee a point (or other score) for identifying a key element. When the facilitator clicks on a scoring icon, the icon, originally white, will turn green to signify the user has correctly identified a key element. Other colors/indicators can be used as well. If the facilitator clicks on these scoring icons in error, they have the option of re-clicking on the scoring icon(s) (or otherwise correcting the scoring error). This will return the icon to white and no points will be calculated.
Optionally, the system automatically scores one or more aspects of the trainee's performance. For example, the system can detect (e.g., via sound received via a microphone coupled to the trainee terminal, wherein input received via the microphone is translated into a digital value) how long it takes the trainee to begin providing an element after a “challenge” (as identified to the training system via the metadata discussed above), and score the speed of the trainee's response and/or provide the actual elapsed time between the challenge and the trainee's response and/or present the elapsed time to the trainer. The scoring of the immediacy of response and confidence, rather than solely providing a blended score of the two, aids the user/trainer in understanding more precisely the learning and performance deficits of the trainee. The trainer can also provide textual/verbal comments (or optionally select predefined comments presented to the trainer via a user interface) regarding the trainee's confidence and the naturalness of the trainee's response. For example, the trainer's user interface can include a text field via which the trainer can enter comments.
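Mapping the detected response latency onto a speed score could look like the following sketch. The cutoffs (full credit up to 1.5 seconds, zero credit at 6 seconds, a 5-point scale) are assumptions for illustration, not values from the patent.

```python
def latency_score(challenge_end, speech_start,
                  full_credit=1.5, zero_credit=6.0, max_points=5):
    """Map response latency (seconds between the end of the challenge and
    the detected onset of the trainee's speech) onto a 0..max_points
    speed score, falling off linearly between the two cutoffs."""
    latency = speech_start - challenge_end
    if latency <= full_credit:
        return max_points
    if latency >= zero_credit:
        return 0
    # Linear fall-off between the full-credit and zero-credit latencies.
    fraction = (zero_credit - latency) / (zero_credit - full_credit)
    return round(max_points * fraction, 2)
```

The raw latency itself could also be shown to the trainer alongside the computed score, as the text suggests.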
Optionally, the scores for two or more aspects of the trainee's provision of an element (which will sometimes be referred to as an “answer”) may be combined into a single score (e.g., as an average score, which is optionally weighted). For example, if the trainee received a score of 5 for appropriateness/correctness of the element, a score of 3 for the trainee's confidence, and a score of 2 for the trainee's naturalness, an average score of 3.33 may be calculated and assigned to the trainee's answer. Different aspects of the trainee's provision of an element can be assigned corresponding different weightings. By way of example, the combined score can be calculated using the following weighted average formula (although other formulas may be used as well).
CombinedScore=TotalMaximumScore×(W₁(Score₁/MaximumPossible₁)+ . . . +Wₙ₋₁(Scoreₙ₋₁/MaximumPossibleₙ₋₁)+Wₙ(Scoreₙ/MaximumPossibleₙ))
Where TotalMaximumScore is the maximum score that can be awarded for the answer, W is the weighting for a corresponding aspect of the answer, Score is the score awarded for a corresponding aspect, and MaximumPossible is the maximum possible score that can be assigned for the corresponding aspect.
For example, using the above formula, if the correctness of the trainee's answer is assigned a weighting of 0.5, and confidence and naturalness are each assigned a weighting of 0.25, then if the trainee received a score of 5 out of 5 for appropriateness/correctness of the element, a score of 3 out of 5 for the trainee's confidence, and a score of 2 out of 5 for the trainee's naturalness, the system calculates and assigns to the trainee's answer a score of 3.75 out of a maximum of 5.
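The weighted-average formula and the worked example above can be expressed directly as a short sketch (the `combined_score` helper is illustrative, not part of the patent):

```python
def combined_score(scores, weights, maxima, total_max):
    """Weighted combination of per-aspect scores, per the formula above:
    TotalMaximumScore * sum(W_i * Score_i / MaximumPossible_i)."""
    assert len(scores) == len(weights) == len(maxima)
    return total_max * sum(w * s / m for w, s, m in zip(weights, scores, maxima))

# Worked example from the text: correctness weighted 0.5, confidence
# and naturalness each weighted 0.25, all three scored out of 5.
score = combined_score(scores=[5, 3, 2],
                       weights=[0.5, 0.25, 0.25],
                       maxima=[5, 5, 5],
                       total_max=5)
# score is 3.75 out of a maximum of 5, matching the example.
```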
A total score can be assigned for multiple answers provided by the trainee using an average, a weighted average, or other calculation based on the scores received for individual answers and/or aspects thereof. Optionally, the score for a given answer and the current total is automatically calculated in substantially real time as the trainee submits answers (or fails to submit answers), with the running total displayed via the trainer terminal and/or the trainee terminal. Optionally, at the end of a training session, the training system provides the scores to the trainer and/or the trainee via an electronic and/or hardcopy report generated by the system.
Optionally, scoring can be by each sub-category or for a total category. If for a total category, a final combined score from sub-categories is presented (e.g., automatically presented or in response to a trainer command).
Optionally, a best to worst rank order scoring (or worst to best rank order scoring) by sub-categories will be presented. This will allow the user/facilitator to know where to focus subsequent training based upon strengths and weaknesses. Optionally, the specific sub-category that should be studied/repeated is displayed. Optionally, the user/facilitator can limit the scoring report so that only the scores for those sub-categories that the user needs further training on (e.g., as determined by the system based on the failure of the user to score at least a certain specified threshold) are reported to the user/facilitator.
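The rank-ordered sub-category report, optionally limited to sub-categories below a passing threshold, could be sketched as follows. The sub-category names, scores, and the 75-point threshold are hypothetical.

```python
def training_report(subcategory_scores, passing_threshold=None):
    """Rank sub-categories from weakest to strongest; if a passing
    threshold is given, limit the report to the sub-categories that
    need further training (score below the threshold)."""
    ranked = sorted(subcategory_scores.items(), key=lambda kv: kv[1])
    if passing_threshold is not None:
        ranked = [(name, s) for name, s in ranked if s < passing_threshold]
    return ranked

# Hypothetical session results for one trainee.
scores = {"objection handling": 62, "product knowledge": 88, "rapport": 74}
needs_work = training_report(scores, passing_threshold=75)
```

Here `needs_work` lists only the two sub-categories scoring under 75, weakest first, telling the user/facilitator where to focus subsequent training.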
Optionally, during the tested portion of the training session, different challenges will be repeated a different number of times. Optionally, the selection of the challenges to be repeated and/or the repeat rate are random or pseudo random. Optionally, the more significant or otherwise selected challenges are weighted so that they are or tend to be repeated more often than those challenges that are considered less significant. This weighting promotes the testing of more significant and/or more difficult to acquire skills/information.
Optionally, after the trainee has provided an answer (e.g., after the answer has been scored and/or after the trainee has completed a module or tested training portion thereof), the trainee is presented with a model answer, with the corresponding element displayed and/or verbalized. When verbalized, optionally the verbalization is provided with a confident sounding voice that the user should emulate. Optionally, the key elements provided in the answers are bolded, highlighted, underlined, or otherwise visually emphasized as compared to the sentence/phrase structure in which they are incorporated. The key elements provided in the model answer are optionally role modeled, verbalized, with the text of the key elements appearing in a super-imposed manner as they are verbalized, for cognitive and behavioral embedding purposes. The text super-impositions are optionally highlighted as they are displayed.
Optionally, the model answer is automatically presented and/or is presented in response to a trainee instruction (e.g., issued via a user interface presented via the trainee terminal). Optionally, first the element is displayed, and then the model answer is provided (e.g., textually and/or verbalized) with the element still displayed. Where there is more than one element, optionally the elements are introduced one at a time, until all the relevant elements are displayed. The revealed elements correspond to the model answer. Optionally, the trainee can take notes while the element and model answer are presented.
Optionally, a “notes” field is presented on the trainee terminal wherein the trainee can enter notes, which will then be saved in computer memory. The notes can optionally be printed and/or later accessed by the trainee. Optionally, via a user interface control, the trainer can instruct the system to repeat a selected challenge or module. Optionally, the training system automatically repeats the challenge and/or module if the trainee's score falls below a threshold defined by the system, the trainer, the trainee's employer, the trainee and/or other designated person. For example, optionally a challenge and/or module is repeated if the trainee received less than a perfect score to thereby better drill the trainee to be able to provide correct answers that include the appropriate significant elements, without hesitation and in a confident manner.
Optionally, during a training session, the system automatically presents the trainee with one or more challenges that the trainee had successfully mastered (e.g., as determined by the trainee's score) in one or more previous training sessions. Such “surprise drilling sections” help test and reinforce the trainee's retention of information and skills obtained during training.
Optionally, a training session can be presented in the form of a game to help keep the trainee's interest and/or to enhance the training impact. For example, when the trainee receives a score above a specified threshold, something pleasing happens (e.g., a certain tune is played, a certain image/video is displayed, a piece of an electronic puzzle is awarded, the trainee earns points/weapons/attributes that can be used in an electronic game, etc.). Optionally, the training can be presented in a format wherein the trainee must answer questions correctly (e.g., receive a predetermined score) in order to defeat an adversary (e.g., a simulated robot or alien). Optionally, there can be multiple players participating in a game, where if the first to answer is incorrect, then others have the chance to answer and score.
Optionally, a user's verbalized responses are recorded by hitting a “record” button. These recorded responses are immediately (or in a delayed fashion) played back via a playback button. The objective in this example embodiment is to provide the user with substantially instant feedback about how the user sounds from a style and/or attitude perspective. Optionally, substantially immediately after the playback, the facilitator/trainer asks questions of the user regarding the user's perception of the user's style and/or attitude. Examples of these questions are:
    • How do you think you sounded?;
    • Do you think you came across as confident and knowledgeable?;
    • Would you have been convinced by your response as a customer or prospect?;
    • How could you have improved?, etc.
Optionally, once the playback of the user's recorded segment is complete, there can be an automatic default to the questions which are “asked” by the training system. That is, the questions are verbalized by a pre-recorded or synthesized voice at substantially the same time as text is displayed. Optionally, each question is “asked” separately. Optionally, two or more questions are asked together. After the response and/or discussion between the user and facilitator, the user/facilitator presses a “proceed” button (or other corresponding control) and the next question is asked, and so on.
Optionally, there is an option for re-recording a user response without saving the initial recorded segment via a control on the trainee and/or facilitator user interface.
Optionally, via a control on the trainee and/or facilitator user interface (e.g., a save recording icon that can be activated by the trainee and/or facilitator), there is an option for saving the recording as a “self-referenced role model” which the user and/or facilitator can later access as an example of a good response.
Optionally, there can be standard questions (e.g., 1, 2, 3, 4, 5, or more questions) with respect to the self-recording option, or these questions can be customized. For example, in order to remove the burden from the facilitator, once the user hears herself, and the system queries the user regarding the user's performance, the same questions can be asked each time (e.g., “How do you think you sounded?”, “How could you improve your response?”, etc.), or the system instead can ask different questions for different types of challenges (e.g., for an objection, the system could ask “Do you feel you have overcome the customer's objections?”).
Example embodiments will now be described with reference to the figures. Throughout the following description, the term “Web site” is used to refer to a user-accessible network site that implements the basic World Wide Web standards for the coding and transmission of hypertextual documents. These standards currently include HTML (the Hypertext Markup Language) and HTTP (the Hypertext Transfer Protocol). It should be understood that the term “site” is not intended to imply a single geographic location, as a Web or other network site can, for example, include multiple geographically distributed computer systems that are appropriately linked together. Furthermore, while the following description relates to an embodiment utilizing the Internet and related protocols, other networks, such as networks of interactive televisions or of telephones, and other protocols may be used as well.
In addition, unless otherwise indicated, the functions described herein are preferably performed by executable code and instructions stored in computer readable memory and running on one or more general-purpose computers. However, the present invention can also be implemented using special purpose computers, other processor based systems, state machines, and/or hardwired electronic circuits. Further, with respect to the example processes described herein, not all the process states need to be reached, nor do the states have to be performed in the illustrated order. Further, certain process states that are described as being serially performed can be performed in parallel.
Similarly, while the following examples may refer to a user's personal computer system or terminal, other terminals, including other computer or electronic systems, can be used as well, such as, without limitation, an interactive television, a networked-enabled personal digital assistant (PDA), other IP (Internet Protocol) device, a cellular telephone or other wireless terminal, a networked game console, a networked MP3 or other audio device, a networked entertainment device, and so on.
Further, while the following description may refer to a user pressing or clicking a key, button, or mouse to provide a user input or response, the user input can also be provided using other apparatus and techniques, such as, without limitation, voice input, touch screen input, light pen input, touch pad input, and so on. Similarly, while the following description may refer to certain messages or questions being presented visually to a user via a computer screen, the messages or questions can be provided using other techniques, such as via audible or spoken prompts.
One example embodiment utilizes a computerized training system to enhance a trainee's listening comprehension. For example, the training can be delivered via a terminal, such as a stand-alone personal computer. The training program may be loaded into the personal computer via a computer readable medium, such as a CD ROM, DVD, magnetic media, solid state memory, or otherwise, or downloaded over a network to the personal computer.
By way of further example, the training program can be hosted on a server and interact with the user over a network, such as the Internet or a private network, via a client computer system or other terminal. For example, the client system can be a personal computer, a computer terminal, a networked television, a personal digital assistant, a wireless phone, an interactive personal media player, or other entertainment system. A browser or other user interface on the client system can be utilized to access the server, to present training media, and to receive user inputs.
As will be described in greater detail below, in one embodiment, a training system presents a scenario to a user via a terminal, such as a personal computer or interactive television. The scenario can be a pre-recorded audio and/or video scenario including one or more segments. The scenario can involve a single actor or multiple actors (e.g., a human actor or an animated character) reading a script relevant to the field and skill being trained. For example, the actors may be simulating an interaction between a bank teller or loan officer and a customer. By way of further example, the simulated interaction can instead be for in-person and phone sales or communications. By way of still further example, the actors may be simulating an interaction between a parent and a child. Optionally, rather than using actors to read a script, the pre-recorded scenario can involve a real-life unscripted interaction.
FIG. 1 illustrates an example networked training system including a Web/application server 110, used to host the training application program and serve Web pages, a scenario database 112, that stores prerecorded scenario segments, and a user database 114 that stores user identifiers, passwords, training routines for corresponding users (which can specify which training categories/scenarios are to be presented to a given user and in what order), training scores, recordings of training sessions, and user responses provided during training sessions. The training system is coupled to one or more trainee user terminals 102, 104, and a trainer terminal 106 via a network 108, which can be the Internet or other network.
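The record layout of the scenario and user databases described above can be sketched as follows. This is purely an illustrative model; the patent does not specify a schema, and every field name here is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioSegment:
    scenario_id: str
    category: str
    media_path: str  # location of the prerecorded audio/video segment

@dataclass
class UserAccount:
    user_id: str
    password: str
    # Ordered list of scenario identifiers to present to this trainee,
    # corresponding to the "training routines" stored in user database 114.
    training_routine: list = field(default_factory=list)
    # Mapping of scenario identifier -> most recent training score.
    scores: dict = field(default_factory=dict)

# A routine specifies which scenarios are presented and in what order.
teller_drill = UserAccount("trainee01", "secret",
                           training_routine=["greeting-01", "objection-02"])
```

The separation into a scenario store and a per-user store mirrors the division between scenario database 112 and user database 114 in FIG. 1.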
FIG. 2 illustrates an example training process. At state 202, the trainee and/or the trainer logs in to the training system via a training terminal. The training system can utilize the log-in information to access account information for the trainee and/or trainer. The account information optionally includes an identification of the training categories/modules that the trainee and/or trainer are authorized to access. At state 204, a training module is selected. For example, a training category/module can be selected by the trainee or the trainer from a menu of modules presented on the training terminal, wherein the menu includes modules that the trainee/trainer are authorized to access. Optionally, the system automatically selects the module based on the trainee's training history (e.g., the modules that the trainee has previously utilized and/or the trainee's scores).
At state 206, instructions regarding the use of the training system are displayed to the trainee and/or trainer. At state 208, in response to a trainee or trainer command, a preparatory training session begins. The preparatory training session presents information, questions, statements, and the like (sometimes referred to herein generally as materials), related to the training subject, where the trainee is expected to utilize some or all of the presented materials during the subsequent testing session.
At state 210, in response to a trainee or trainer command, the tested portion of the training begins. The trainee is presented with a scenario related to the materials taught during the preparatory portion of the training. The scenario will include one or more challenges by an actor pretending to be a customer, prospect, or other person relevant to the training subject. The scenario may be provided via a video recording of a person or via an animation (e.g., a FLASH animation). By way of example, the challenges can be in the form of a question (e.g., a question about a product or service, a question asking advice regarding a relevant matter, etc.) or a statement (e.g., a statement regarding the customer's current condition or plans, or regarding a product or service).
At state 212, the trainee provides an answer in response to the challenge. At state 214, the trainer and/or the training system score the answer. The correct elements are displayed to the trainer with corresponding scoring icons (e.g., circular scoring icons, or other desired shape) that the trainer can use to score the user. The answer may be scored on the correctness of the answer, the quickness with which the answer was given, the naturalness of the answer, and/or the confidence and style in which the answer is given. At state 216, the model answer is presented via multimedia (e.g., audibly, optionally in conjunction with the video portion of the scenario) to the trainee on the training terminal, optionally with the corresponding materials presented textually. The elements of the material may be presented in overlay format over the module scenario video/animation.
At state 218, a determination is made as to whether the testing portion of the training session has been completed. If not, the process proceeds to state 210, and the next scenario is presented. The next scenario can be randomly or pseudo-randomly selected, or the system optionally may enable the trainer and/or trainee to select the next scenario. The next scenario is optionally a repeat of a previously presented scenario (e.g., from the current module or from another module). The next scenario is optionally selected based on the trainee's score on a previous scenario.
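The next-scenario selection just described admits several policies: pseudo-random choice, manual selection, or repetition driven by the trainee's previous score. A minimal sketch of such a policy follows; the threshold value and the repeat rule are assumptions for illustration, not part of the patent:

```python
import random

def select_next_scenario(remaining, last_scenario=None, last_score=None,
                         repeat_threshold=80):
    """Choose the next scenario at state 218/210: repeat the previous
    scenario when the trainee scored poorly on it, otherwise pick
    pseudo-randomly from the remaining scenarios."""
    if last_scenario is not None and last_score is not None \
            and last_score < repeat_threshold:
        return last_scenario          # drill the weak scenario again
    return random.choice(remaining)   # pseudo-random selection
```

A facilitator-driven system would replace the pseudo-random branch with the trainer's or trainee's explicit choice.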
If, at state 218, a determination is made that the testing portion of the module has been completed, the process proceeds to state 220, and a test report is generated optionally including the scores for each challenge, a total score for each challenge, and/or a total score for the module. At state 222, the process ends or the trainee/trainer selects another module.
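The state-220 report aggregates per-challenge scores into challenge totals and a module total. A sketch of that aggregation is below; the component names (correctness, timeliness, confidence) follow the scoring described at state 214, but the exact structure is an assumption:

```python
def generate_test_report(challenge_scores):
    """Build the state-220 test report: the component scores for each
    challenge, a total score per challenge, and a module total.
    challenge_scores maps a challenge id to its component scores."""
    challenge_totals = {cid: sum(parts.values())
                        for cid, parts in challenge_scores.items()}
    return {"per_challenge": challenge_scores,
            "challenge_totals": challenge_totals,
            "module_total": sum(challenge_totals.values())}

report = generate_test_report({
    "challenge-1": {"correctness": 8, "timeliness": 2, "confidence": 3},
    "challenge-2": {"correctness": 6, "timeliness": 1, "confidence": 2},
})
# report["module_total"] == 22
```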
Example user interfaces will now be described. An example user interface can provide an introduction to the training system, including instructions on how to access training system drill categories. A link to a log in page is optionally provided and the user is instructed to log into the system and choose a training module with the corresponding desired category and sub-category of drilling. A user interface is provided via which the user or facilitator has the option of clicking on an “Access Pre-Study Screen” icon. The Pre-Study user interface contains important or key elements the user needs to know in order to correctly answer the upcoming challenges. Optionally, the user can skip the pre-study user interface (e.g., if the user is already familiar with the category and sub-category and/or does not feel the need to view the pre-study user interface) by clicking a “Skip Pre-Study & Hear Challenge” icon or the like.
The user is further informed that when the simulated character stops the user is to provide a first person answer/response to the challenge, and that the correct answers are based upon the elements (e.g., the key elements provided via the pre-study user interface) chosen by the user's company or by the creators of the training system. The user is informed that when the user stops verbalization of the challenge response, the user or the facilitator should click on an “Answer” icon to obtain the correct answer(s). In an example embodiment, the user is informed that the training system will now have a simulated figure verbalize a “challenge” (e.g., a statement or question). The user then responds to the challenge.
The system can now display elements (e.g., the key elements) of the guideline correct answer, which matches or substantially matches what the user has learned in previous training and/or from the Pre-Study screen(s). The answer user interfaces can display the elements and/or scores to the user and facilitator jointly, or to the facilitator alone. The scoring can include a score for the elements recited by the user, a score for immediacy of response, and a score for confidence and style.
Optionally, a user interface is provided where the user/facilitator can instruct the system via a control (e.g., “Click to hear Model Answer” icon) to verbalize the model answer with the key elements graphically super-imposed on the screen. The elements (e.g., the key elements) are super-imposed and build up one at a time to display the elements previously revealed on the Pre-Study Screen(s). The revealing of each of these elements correlates with the verbalized Model Answer. Optionally, a control is provided via which the user/facilitator has the ability to repeat this particular “challenge” if they choose to. For learning purposes, if the user gets anything less than a “perfect score” (or other designated score), then they should repeat the sub-category to better ensure they can produce a fluid answer that contains the key elements, without hesitation and in a confident manner.
Another user interface is provided including a “proceed” control, which when activated, causes the system to verbally state another randomized challenge and the process continues. The training module will inform the user and facilitator when all of the challenges within the sub-category have been completed. The user and facilitator can elect to repeat the sub-category or move on to a different sub-category and/or category. Such repetitive drilling will better ensure trainees will learn the materials.
FIGS. 3A-II illustrate example user interfaces which can be presented via a Web browser or other application. Additional, fewer, or different user interfaces can be used in other example embodiments. The language provided in the example user interfaces is not intended to limit the invention, but is merely an example of language that can be used. Other language and terminology can be used as well. The example interfaces include controls (e.g., in the form of underlined links, buttons, or otherwise) that can be used to navigate the training session (e.g., proceed, next page, click for answer, etc.). Optionally, some or all of the text displayed via the user interface can instead or in addition be verbalized. Optionally, an animated figure/face or prerecorded actual person verbalizes or appears to verbalize some or all of the text or script.
FIG. 3A illustrates an example introduction to the training system, including a listing of example training subject categories and sub-categories. FIG. 3B illustrates example instructions that are optionally presented to a facilitator and/or user which describe how to access a training module. In this example, the instructions provide a Website URL (uniform resource locator) via which the trainer or trainee can log-in and access the training program. The user is further instructed to choose the desired training category/subcategory. The instructions further instruct the trainee/trainer how to initiate the preparatory/pre-training phase of the training module, and provide information as to what to expect during the preparatory/pre-training phase.
The example instructions indicate that after the preparatory/pre-training phase, the “challenge” phase of the training module will be initiated, wherein a video/animated character will verbalize a challenge (e.g., a statement or question). The trainee is instructed how to respond to the challenges and how the trainee's answers will be scored. Instructions are further provided on how to access model answers, and what to expect while accessing model answers. In addition, the trainer/trainee are informed that the scenario can be repeated, that additional challenges will be presented, and that the module can be repeated.
FIG. 3C illustrates an example user interface informing the trainee that a practice training session is about to begin. The trainee is further informed that the facilitator has logged in, is participating in the training session, and has selected a training category/subcategory (e.g., alphabets/American). FIG. 3D illustrates a user interface that provides an explanation to the trainee regarding the upcoming challenge testing, and provides instructions on how the trainee is to respond to the challenges (e.g., in the first person, as soon as the trainee hears a challenge). Optionally, a figure (e.g., an animated figure or actual person) is displayed which appears to verbalize the instructions.
FIG. 3E illustrates an example challenge in the form of a question. FIGS. 3F-3G illustrate example scoring user interfaces, wherein scores (e.g., numerical scores) are assigned for the correctness of a trainee response to a challenge, the timeliness of the response, and the confidence with which the response was delivered. FIG. 3F illustrates an example answer-scoring user interface used to score the correctness of trainee's challenge response. The scoring user interface can be completed by the trainee, the facilitator, or both jointly, as permitted by the training rules. FIG. 3G illustrates a scoring user interface used to score the timeliness of the trainee's challenge response (e.g., whether the response was undesirably delayed or immediate). FIG. 3H illustrates a scoring interface used to score the confidence with which the trainee responds to the challenge (e.g., not confident, somewhat confident, confident), with a different number of points assigned based on the level of confidence.
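The three scoring dimensions of FIGS. 3F-3H (correctness, timeliness, confidence) can be sketched as a single function. The point values and time thresholds below are assumptions for illustration; the patent does not fix a particular formula:

```python
def score_response(response_elements, key_elements, delay_seconds,
                   confidence_level):
    """Illustrative sketch of the FIG. 3F-3H scoring components."""
    # Correctness (FIG. 3F): one point per key element recited.
    correctness = sum(1 for e in key_elements if e in response_elements)
    # Timeliness (FIG. 3G): immediate responses earn more than delayed ones.
    timeliness = 2 if delay_seconds <= 2 else 1 if delay_seconds <= 5 else 0
    # Confidence (FIG. 3H): 0 = not confident, 1 = somewhat, 2 = confident.
    confidence = confidence_level
    return correctness + timeliness + confidence
```

In practice the correctness and confidence values would be entered by the trainee, the facilitator, or both, as permitted by the training rules.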
FIG. 3I illustrates a user interface that explains that a model answer will be presented listing the elements that should have been provided in response to the challenge. The trainee is further informed that notes may be taken (e.g., to enhance the trainee's performance). FIG. 3J illustrates the correct elements, wherein the elements are optionally sequentially and cumulatively displayed and verbalized, so that when the last element is displayed all the elements are displayed. FIG. 3K illustrates the calculated total score for the trainee's challenge response. A “repeat scenario” control is provided in this example, which when activated by the trainee/trainer causes the challenge to be repeated. A “proceed” control is provided that, when activated, causes the system to present the next challenge. Optionally, the next challenge is randomly/pseudo-randomly selected by the training system. Optionally, the trainer or trainee selects the next challenge.
FIG. 3L illustrates the presentation of the next challenge. FIGS. 3M-3O illustrate example scoring user interfaces for the second challenge, wherein scores (e.g., numerical scores) are assigned for the correctness of a trainee response to a challenge, the timeliness of the response, and the confidence with which the response was delivered. FIG. 3P illustrates a user interface that explains that a model answer will be presented listing the elements that should have been provided in response to the challenge. The trainee is further informed that notes may be taken (e.g., to enhance the trainee's performance). FIG. 3Q illustrates the key elements, wherein the element is optionally verbalized. FIG. 3R illustrates the calculated total score for the trainee's challenge response.
FIG. 3S illustrates an example user interface congratulating the trainee on completing the challenges in the present module. The trainee is further instructed to activate a "proceed" control to further utilize the current training module, or to activate an "exit module" control in order to proceed to the main menu (e.g., where the trainer/trainee can select another training module).
FIG. 3T illustrates a user interface that explains to the trainee that the trainee will now participate in a training module in a different category (product knowledge/checking accounts). FIG. 3U illustrates a user interface that provides an explanation to the trainee regarding the upcoming challenge testing and provides instructions on how the trainee is to respond to the challenges. Optionally, a figure is displayed which appears to verbalize the instructions. FIG. 3V illustrates a user interface where another challenge is presented, this one related to a checking account for minors. FIGS. 3W-3Y illustrate example scoring user interfaces for this challenge, wherein scores (e.g., numerical scores) are assigned for the correctness of a trainee response to a challenge, the timeliness of the response, and the confidence with which the response was delivered.
FIG. 3Z illustrates a user interface that explains that a model answer will be presented listing the elements that should have been provided in response to the challenge. The trainee is further informed that notes may be taken (e.g., to enhance the trainee's performance). FIG. 3AA illustrates the correct elements, wherein the elements are optionally sequentially and cumulatively displayed and verbalized, so that when the last element is displayed, all the elements that the trainee should have provided are displayed. FIG. 3BB illustrates the calculated total score for the trainee's challenge response. A “repeat challenge(s)” control is provided in this example, which when activated by the trainee/trainer causes the challenge to be repeated, but, optionally, only after the final scores are calculated. A “see score” control is provided that, when activated, causes the current score for the challenge to be presented.
FIG. 3CC illustrates another example category selection user interface via which the trainee or the facilitator can select a training category (e.g., a different product or service to be trained on). Once the training category is selected, the example user interface illustrated in FIG. 3DD is presented via which a category focus can be selected (e.g., Product/Service Description, Product/Service, Product/Service Objections).
FIGS. 3EE-3GG illustrate another set of example user interfaces providing key elements during the pre-training stage. The corresponding model answers are optionally the same as the key elements. The key elements and model answers can be presented textually and/or audibly. Certain portions of the key elements and model answers are emphasized to highlight the important concepts.
FIG. 3HH illustrates another example user interface providing scoring for a challenge response. FIG. 3II illustrates another example user interface providing a scoring summary for a training session. The summary includes scoring for drilling in three different product categories. In this example, scores are displayed in rank order with the weakest/lowest score on top and strongest/highest score at the bottom within each component of the score.
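The FIG. 3II rank ordering, weakest score first and strongest last, amounts to a simple sort. A minimal sketch, with assumed category names:

```python
def rank_scores(category_scores):
    """Order scores weakest/lowest first and strongest/highest last,
    as in the scoring summary of FIG. 3II."""
    return sorted(category_scores.items(), key=lambda item: item[1])

summary = rank_scores({"checking": 85, "savings": 60, "loans": 72})
# [('savings', 60), ('loans', 72), ('checking', 85)]
```

Placing the weakest category on top directs the trainee's attention to the area most in need of repeat drilling.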
Optionally, a pre-recorded or synthesized voice (e.g., a "signature voice") tells the users how they did (e.g., a voice that simulates a robot speaking in a stilted manner, a pirate, or another fictional character). This includes an indication as to how the user did vs. others and/or vs. "the machine". That is, certain scores would trigger a verbalization such as "you have defeated me!" while others might trigger "I have defeated you . . . try again". Optionally, the indications can instead, or in addition, be provided via text or with a "traditional" voice as well.
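The score-triggered verbalization can be sketched as a threshold comparison. The phrases come from the description above; the comparison rule against "the machine" is an assumption for illustration:

```python
def signature_voice_message(user_score, machine_score):
    """Pick the 'signature voice' verbalization triggered by the
    user's score relative to 'the machine'."""
    if user_score > machine_score:
        return "you have defeated me!"
    return "I have defeated you . . . try again"
```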
Optionally, there can be pre-scoring when the user participates alone, which can then be compared to scoring when the user works with a facilitator.
Optionally, training can be done in a self-study mode without a trainer/facilitator. Thus, the system can be used for self-coaching.
Optionally, a replay option is provided for each category and/or sub-category (or a subset thereof) for performance improvement.
Optionally, instructions will be verbalized.
Thus, as discussed above, certain embodiments teach and train a user to utilize information and skills in a simulated real-world environment. The user optionally undergoes extensive testing, where their performance is scored based on their retention of the information, and their ability to provide the information to others in a natural, confident manner. Thus, the training system aids users in internalizing and behaviorally embedding information and skills learned during training. Users are optionally trained to provide information, respond to objections, or ask questions as appropriate almost automatically, without undesirable pauses.
It should be understood that certain variations and modifications of this invention would suggest themselves to one of ordinary skill in the art. The scope of the present invention is not to be limited by the illustrations or the foregoing descriptions thereof.

Claims (57)

What is claimed is:
1. A computerized training system comprising non-transitory computer readable memory storing programmatic code, that when executed by a computing device is configured to perform operations comprising:
receive a selection of at least one training subject;
present via a first study session user interface on a trainee terminal during a study session a challenge, wherein the challenge is to be audibly responded to when delivered by a simulated customer or prospect during a first practice session, the challenge corresponding to the selected training subject, and role model phrases including key elements that a trainee is to be tested on, wherein the challenge and role model phrases are presented as written text and the key elements are visually emphasized as compared to the phrases in which they are incorporated, and wherein the role model phrases provide a model of a response to the challenge and wherein the role model phrases are visually separated from each other when presented;
textually present via a second study session user interface on the trainee terminal the challenge and the key elements, wherein the key elements are not embedded in surrounding phrases and are visually separated from one another and wherein the challenge is to be audibly responded to when delivered by the simulated customer or prospect during the first practice session,
present a video or animation of a character that audibly recites:
the key elements without the surrounding phrases, or
the role model phrases including the key elements, or
both the key elements without the surrounding phrases and the role model phrases including the key elements;
via a first practice session user interface, provide the trainee with the challenge from the simulated customer or prospect, wherein the challenge is provided textually and via a video or animation of the real or simulated customer or prospect, and wherein the trainee is to provide a verbal, audible response to the challenge;
after receipt of an indication that the trainee has audibly responded to the challenge presented via the first practice session user interface, present the challenge, provided from the simulated customer or prospect via the first practice session user interface, and the corresponding key elements in at least textual form to the trainee; and
provide navigation controls via which the trainee can navigate directly between user interfaces, including from the first practice session user interface back to a previously presented study session user interface.
2. The system as defined in claim 1, wherein the programmatic code is further configured to:
store a first challenge score related to:
correctness of the audibly provided trainee challenge response, or
completeness of the audibly provided trainee challenge response, or
both the correctness of the audibly provided trainee challenge response and the completeness of the audibly provided trainee challenge response;
wherein the first score is based at least in part on a correspondence of the trainee challenge response to corresponding key elements presented during the study session.
3. The system as defined in claim 2, wherein the programmatic code is further configured to:
receive and store a second challenge score related to the audibly verbalized trainee response, wherein the second challenge score is related to how quickly the trainee initiated the challenge response; and
receive and store a third challenge score related to the audibly verbalized trainee response, wherein the third challenge score is related to:
confidence with which the trainee verbalized the challenge response, or
style with which the trainee verbalized the challenge response, or
both the confidence with which the trainee verbalized the challenge response and the style with which the trainee verbalized the challenge response.
4. The system as defined in claim 3, wherein the programmatic code is further configured to use a formula to generate a cumulative score using at least the following:
the first challenge score related to:
correctness of the challenge response, or
completeness of the challenge response; or
both the correctness of the challenge response and the completeness of the challenge response;
the second challenge score related to how quickly the trainee provided the challenge response; and
the third challenge score related to:
the confidence with which the trainee verbalized the challenge response, or
the style with which the trainee verbalized the challenge response, or
both the confidence with which the trainee verbalized the challenge response and the style with which the trainee verbalized the challenge response.
5. The system as defined in claim 1, wherein the programmatic code is further configured to present each key element in bullet form via the second study session user interface.
6. The system as defined in claim 1, wherein the programmatic code is further configured to provide for display on the trainee terminal each role model phrase in bullet form.
7. The system as defined in claim 1, wherein the programmatic code is further configured to present a control, which when activated by the trainee, causes a user interface to be repeated.
8. The system as defined in claim 1, wherein the training subject is related to:
(a) sales, or
(b) service, or
(c) how to relate in a personal situation, or
(d) academic knowledge,
(e) overcoming objections to a course of action, or
(f) proper procedures at a place of employment, or
any combination of (a), (b), (c), (d), (e), or (f).
9. The system as defined in claim 1, further comprising the trainee terminal, a facilitator terminal used to enter scores, and a server configured to service the first study session user interface, the second study session user interface, and the first practice session user interface.
10. The system as defined in claim 1, wherein a complete response to the challenge includes a plurality of elements describing a product or service.
11. A method of providing training using a computerized system, the method comprising:
receiving, at the computerized system, a selection of at least one training subject;
at least partly in response to receiving the selection, providing, via the computerized system, for display on a trainee terminal during a study session a first study session user interface, the first study user interface including a challenge, wherein the challenge is to be audibly responded to by a trainee when delivered by a simulated customer or prospect during a first practice session, the challenge corresponding to the selected training subject and role model phrases including key elements that the trainee is to be tested on, wherein
the challenge and role model phrases are provided for display as written text,
the key elements are visually emphasized as compared to the phrases in which they are incorporated, and
the role model phrases provide a model of a response to the challenge and wherein the role model phrases are visually separated from each other when presented and wherein the challenge is to be audibly responded to when delivered by the simulated customer or prospect during the first practice session;
providing, via the computerized system, for display on the trainee terminal a second study session user interface, the second study session user interface including the challenge and the key elements in text form, wherein the key elements are not embedded in surrounding phrases and are visually separated from one another and wherein the challenge is to be audibly responded to when delivered by the simulated customer or prospect during the first practice session,
providing, via the computerized system, for display on the trainee terminal a video or animation of a character audibly reciting the key elements without:
the surrounding phrases, or
the role model phrases including the key elements, or
both the surrounding phrases and the role model phrases including the key elements;
providing, via the computerized system, for display on the trainee terminal a first practice session user interface that presents the trainee with the challenge from the simulated customer or prospect, wherein the challenge is provided textually and via a video or animation of the real or simulated customer or prospect, and wherein the trainee is to provide a verbal, audible response to the challenge;
after receipt, at the computerized system, of an indication that the trainee has audibly responded to the challenge presented via the first practice session user interface, providing, via the computerized system, for display on the trainee terminal the challenge, provided from the simulated customer or prospect via the first practice session user interface, and the corresponding key elements in at least textual form to the trainee;
calculating at least one score related to the audibly provided trainee challenge response;
providing the calculated at least one score for display;
providing, via the computerized system, navigation controls via which the trainee can navigate directly between user interfaces, including from the first practice session user interface back to a previously presented study session user interface;
receiving, at the computerized system, navigation instructions via the navigation controls; and
responding, via the computerized system, to the navigation instructions by providing a corresponding user interface.
12. The method as defined in claim 11, further comprising
storing a first challenge score related to:
correctness of the audibly provided trainee challenge response, or
completeness of the audibly provided trainee challenge response, or
both the correctness of the audibly provided trainee challenge response and the completeness of the audibly provided trainee challenge response,
wherein the first challenge score is based at least in part on a correspondence of the trainee challenge response to corresponding key elements presented during the study session.
13. The method as defined in claim 12, further comprising:
receiving and storing a second challenge score related to the audibly verbalized trainee response, wherein the second challenge score is related to how quickly the trainee initiated the challenge response; and
receiving and storing a third challenge score related to the audibly verbalized trainee response, wherein the third challenge score is related to:
confidence, or
style,
or both the confidence and style,
with which the trainee verbalized the challenge response.
14. The method as defined in claim 13, further comprising:
generating a cumulative score using at least the following:
the first challenge score related to:
correctness of the challenge response, or
completeness of the challenge response; or
both the correctness of the challenge response and the completeness of the challenge response;
the second challenge score related to how quickly the trainee provided the challenge response; and
the third challenge score related to:
the confidence and/or style with which the trainee verbalized the challenge response
the confidence with which the trainee verbalized the challenge response, or
the style with which the trainee verbalized the challenge response, or
both the confidence with which the trainee verbalized the challenge response and the style with which the trainee verbalized the challenge response.
15. The method as defined in claim 11, further comprising presenting each key element in bullet form via the second study session user interface.
16. The method as defined in claim 11, further comprising providing for display on the trainee terminal each role model phrase in bullet form.
17. The method as defined in claim 11, further comprising providing for display on the trainee terminal a control, which when activated by the trainee, causes a user interface to be repeated.
18. The method as defined in claim 11, wherein the training subject is related to:
(a) sales, or
(b) service, or
(c) how to relate in a personal situation, or
(d) academic knowledge, or
(e) overcoming objections to a course of action, or
(f) proper procedures at a place of employment, or
any combination of (a), (b), (c), (d), (e), or (f).
19. The method as defined in claim 11, wherein one or more user interfaces are served to the trainee terminal by a remote server.
20. The method as defined in claim 11, wherein a complete response to the challenge includes a plurality of elements describing a product or service.
21. A computerized training system comprising non-transitory computer readable memory storing programmatic code that, when executed by a computing device, is configured to perform operations comprising:
present via a first study session user interface on a user terminal a challenge, wherein the challenge is to be audibly responded to when delivered by a real or simulated person during a first practice session, the challenge corresponding to a selected training subject and role model phrases including key elements that a user is to be tested on, wherein the challenge and role model phrases are presented as written text and wherein the role model phrases provide a model of a response to the challenge;
textually present via a second study session user interface on the user terminal the challenge and the key elements, wherein the key elements are not embedded in surrounding phrases and are visually separated from one another and wherein the challenge is to be audibly responded to when delivered by the real or simulated person during the first practice session;
present the user, via a first practice session user interface on the user terminal, with the challenge both textually and via a video or animation of the real or simulated person that appears to speak the challenge, wherein the user is to provide a verbal, audible response to the challenge;
after receipt of an indication that the user has audibly responded to the challenge delivered by the real or simulated person via the first practice session user interface, present the challenge and the corresponding key elements in at least textual form to the user; and
provide navigation controls via which the user can navigate directly between user interfaces, including from the first practice session user interface back to a previously presented study session user interface.
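The navigation element of claim 21 (controls allowing the user to move directly between interfaces, including from the first practice session back to an earlier study session interface) can be sketched as a simple session object. The interface names and class are illustrative assumptions, not claim language:

```python
# Minimal sketch of the claimed navigation: an ordered list of user
# interfaces plus a control that permits direct jumps, forward or
# backward, including from the practice session UI back to a study UI.
class TrainingSession:
    INTERFACES = ["study_1", "study_2", "practice_1", "review"]

    def __init__(self):
        self.current = self.INTERFACES[0]

    def navigate(self, target):
        # Direct navigation: any listed interface is reachable from
        # any other, which covers the practice-to-study back jump.
        if target not in self.INTERFACES:
            raise ValueError(f"unknown interface: {target}")
        self.current = target
        return self.current
```

For instance, a user on `practice_1` may call `navigate("study_1")` to return directly to the previously presented study session interface.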
22. The system as defined in claim 21, wherein the programmatic code is further configured to visually emphasize the key elements in the first study session user interface as compared to the phrases in which they are incorporated.
23. The system as defined in claim 21, wherein the programmatic code is further configured to visually separate the role model phrases from each other when presented via the first study session user interface.
24. The system as defined in claim 21, wherein the programmatic code is further configured to present a video or animation of a character that audibly recites the key elements without:
the surrounding phrases, or
the role model phrases including the key elements, or
both the surrounding phrases and the role model phrases including the key elements.
25. The system as defined in claim 21, wherein the person is a simulated customer or prospect.
26. The system as defined in claim 21, wherein the programmatic code is further configured to:
store a first challenge score related to
correctness of the audibly provided user challenge response, or
completeness of the audibly provided user challenge response, or
both the correctness and completeness of the audibly provided user challenge response,
wherein the first challenge score is based at least in part on a correspondence of the user challenge response to corresponding key elements presented during the study session.
27. The system as defined in claim 26, wherein the programmatic code is further configured to:
receive and store a second challenge score related to the audibly verbalized user response, wherein the second challenge score is related to how quickly the user initiated the challenge response; and
receive and store a third challenge score related to the audibly verbalized user response, wherein the third challenge score is related to:
confidence, or
style, or
both the confidence and style,
with which the user verbalized the challenge response.
28. The system as defined in claim 27, wherein the programmatic code is further configured to use a formula to generate a cumulative score using at least the following:
the first challenge score related to:
the correctness of the challenge response, or
the completeness of the challenge response, or
both the correctness and completeness of the challenge response;
the second challenge score related to how quickly the user provided the challenge response; and
the third challenge score related to:
the confidence with which the user verbalized the challenge response, or
the style with which the user verbalized the challenge response, or
both the confidence and style with which the user verbalized the challenge response.
29. The system as defined in claim 21, wherein the programmatic code is further configured to present:
each key element, or
each role model phrase, or
both each key element and each role model phrase,
in bullet form.
30. The system as defined in claim 21, wherein the programmatic code is further configured to present a control, which when activated by the user, causes a user interface to be repeated.
31. The system as defined in claim 21, wherein the training subject is related to:
(a) sales, or
(b) service, or
(c) how to relate in a personal situation, or
(d) academic knowledge, or
(e) overcoming objections to a course of action, or
(f) proper procedures at a place of employment, or
any combination of (a), (b), (c), (d), (e), or (f).
32. The system as defined in claim 21, further comprising the user terminal, a facilitator terminal used to enter scores, and a server configured to serve one or more user interfaces.
33. The system as defined in claim 21, wherein a complete response to the challenge includes a plurality of elements describing a product or service.
34. A method of providing training using a computerized system, the method comprising:
providing, via the computerized system, for display on a user terminal during a study session a first study session user interface, the first study user interface including a challenge wherein the challenge is to be audibly responded to by a user when delivered by the person during a first practice session, the challenge corresponding to a first training subject and role model phrases including key elements that the user is to be tested on, wherein
the challenge and role model phrases are provided for display as written text, and
the role model phrases provide a model of a response to the challenge;
providing, via the computerized system, for display on the user terminal a second study session user interface, the second study session user interface including the challenge and the key elements in at least text form, wherein the key elements are not embedded in surrounding phrases and are visually separated from one another and wherein the challenge is to be audibly responded to when delivered by the person during the first practice session;
providing, via the computerized system, for display on the user terminal a video or animation of a character audibly reciting at least the key elements;
providing, via the computerized system, for display on the user terminal a first practice session user interface that presents the user with the challenge from a person, wherein the challenge is provided textually and via a video or animation of the person, and wherein the user is to provide a verbal, audible response to the challenge;
after receipt of an electronic indication at the computer system that the user has audibly responded to the challenge presented via the first practice session user interface, providing, via the computerized system, for display on the user terminal the challenge, provided from the person via the first practice session, and the corresponding key elements in at least textual form to the user;
calculating at least one score related to the audibly provided trainee challenge response;
providing the calculated at least one score for display;
providing, via the computerized system, navigation controls via which the user can navigate directly between user interfaces, including from the first practice session user interface back to a previously presented study session user interface;
receiving, at the computerized system, navigation instructions via the navigation controls; and
responding, via the computerized system, to the navigation instructions by providing a corresponding user interface.
35. The method as defined in claim 34, further comprising emphasizing the key elements as compared to the phrases in which they are incorporated via the first study session user interface.
36. The method as defined in claim 34, further comprising visually separating the role model answers from each other when presented via the first study session user interface.
37. The method as defined in claim 34, wherein the step of providing for display on the user terminal the video or animation of a character audibly reciting the key elements is performed without the surrounding phrases being audibly recited.
38. The method as defined in claim 34, wherein the step of providing for display on the user terminal the video or animation of a character audibly reciting the key elements is performed with the surrounding role model phrase, including the key elements, being audibly recited.
39. The method as defined in claim 34, wherein the person is a simulated customer or prospect.
40. The method as defined in claim 34, further comprising:
storing a first challenge score related to:
correctness of the audibly provided user challenge response, or
completeness of the audibly provided user challenge response, or
both the correctness and completeness of the audibly provided user challenge response;
wherein the first challenge score is based at least in part on a correspondence of the user challenge response to corresponding key elements presented during the study session.
41. The method as defined in claim 40, further comprising:
receiving and storing a second challenge score related to the audibly verbalized user response, wherein the second challenge score is related to how quickly the user initiated the challenge response; and
receiving and storing a third challenge score related to the audibly verbalized user response, wherein the third challenge score is related to:
confidence, or
style, or
both the confidence and style,
with which the user verbalized the challenge response.
42. The method as defined in claim 41, further comprising:
generating a cumulative score using at least the following:
the first challenge score related to:
correctness of the challenge response, or
completeness of the challenge response, or
both the correctness of the challenge response and the completeness of the challenge response;
the second challenge score related to how quickly the user provided the challenge response; and
the third challenge score related to:
the confidence with which the user verbalized the challenge response, or
the style with which the user verbalized the challenge response, or
both the confidence with which the user verbalized the challenge response and the style with which the user verbalized the challenge response.
43. The method as defined in claim 34, further comprising presenting:
each key element, or
each role model phrase, or
both each key element and each role model phrase,
in bullet form via the second study session user interface.
44. The method as defined in claim 34, further comprising providing for display on the user terminal a control, which when activated by the user, causes a user interface to be repeated.
45. The method as defined in claim 34, wherein the training subject is related to:
(a) sales, or
(b) service, or
(c) how to relate in a personal situation, or
(d) academic knowledge, or
(e) overcoming objections to a course of action, or
(f) proper procedures at a place of employment, or
any combination of (a), (b), (c), (d), (e), or (f).
46. The method as defined in claim 34, wherein one or more of the user interfaces are served to the user terminal by a remote server.
47. The method as defined in claim 34, wherein a complete response to the challenge includes a plurality of elements describing a product or service.
48. The system as defined in claim 1, wherein the programmatic code is further configured to provide a score related to how confidently the trainee verbalized the challenge response.
49. The system as defined in claim 1, wherein the programmatic code is further configured to calculate at least one score related to the audibly provided trainee challenge response and provide the calculated at least one score for display.
50. The system as defined in claim 1, wherein the programmatic code is further configured to calculate a cumulative score based on a plurality of differently weighted scores related to the audibly provided trainee challenge response and provide the calculated cumulative score.
51. The system as defined in claim 1, further comprising an application server, a scenario database, and a user database.
52. The method as defined in claim 11, wherein the at least one score is related to how confidently the trainee verbalized the challenge response.
53. The system as defined in claim 21, wherein the programmatic code is further configured to store and provide a score related to the audibly verbalized user response, wherein the challenge score is related to how confidently the user verbalized the challenge response.
54. The system as defined in claim 21, wherein the programmatic code is further configured to calculate at least one score related to the audibly provided user challenge response and provide the calculated at least one score for display.
55. The system as defined in claim 21, wherein the programmatic code is further configured to calculate a cumulative score based on a plurality of differently weighted scores related to the audibly provided user challenge response and provide the calculated cumulative score for display.
56. The system as defined in claim 21, further comprising an application server, a scenario database, and a user database.
57. The method as defined in claim 34, further comprising storing and providing a score related to the audibly verbalized user response, wherein the score is related to how confidently the user verbalized the challenge response.
US11/669,079 2007-01-30 2007-01-30 Systems and methods for computerized interactive skill training Active 2030-01-02 US8571463B2 (en)

Priority Applications (11)

Application Number Priority Date Filing Date Title
US11/669,079 US8571463B2 (en) 2007-01-30 2007-01-30 Systems and methods for computerized interactive skill training
CA002676137A CA2676137A1 (en) 2007-01-30 2008-01-10 Systems and methods for computerized interactive skill training
MX2009008131A MX2009008131A (en) 2007-01-30 2008-01-10 Systems and methods for computerized interactive skill training.
SG2012003273A SG177988A1 (en) 2007-01-30 2008-01-10 Systems and methods for computerized interactive skill training
EP08727554.1A EP2118874A4 (en) 2007-01-30 2008-01-10 Systems and methods for computerized interactive skill training
BRPI0807176-4A BRPI0807176A2 (en) 2007-01-30 2008-01-10 SYSTEMS AND METHODS OF PROVIDING TRAINING BY USING COMPUTER SYSTEM
JP2009547348A JP2010517098A (en) 2007-01-30 2008-01-10 System and method for computerized interactive technology training
PCT/US2008/050806 WO2008094736A2 (en) 2007-01-30 2008-01-10 Systems and methods for computerized interactive skill training
AU2008210903A AU2008210903B2 (en) 2007-01-30 2008-01-10 Systems and methods for computerized interactive skill training
US14/056,763 US9633572B2 (en) 2007-01-30 2013-10-17 Systems and methods for computerized interactive skill training
US15/492,879 US10152897B2 (en) 2007-01-30 2017-04-20 Systems and methods for computerized interactive skill training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/669,079 US8571463B2 (en) 2007-01-30 2007-01-30 Systems and methods for computerized interactive skill training

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/056,763 Continuation US9633572B2 (en) 2007-01-30 2013-10-17 Systems and methods for computerized interactive skill training

Publications (2)

Publication Number Publication Date
US20080182231A1 US20080182231A1 (en) 2008-07-31
US8571463B2 true US8571463B2 (en) 2013-10-29

Family

ID=39668407

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/669,079 Active 2030-01-02 US8571463B2 (en) 2007-01-30 2007-01-30 Systems and methods for computerized interactive skill training
US14/056,763 Active 2027-10-19 US9633572B2 (en) 2007-01-30 2013-10-17 Systems and methods for computerized interactive skill training
US15/492,879 Active US10152897B2 (en) 2007-01-30 2017-04-20 Systems and methods for computerized interactive skill training

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/056,763 Active 2027-10-19 US9633572B2 (en) 2007-01-30 2013-10-17 Systems and methods for computerized interactive skill training
US15/492,879 Active US10152897B2 (en) 2007-01-30 2017-04-20 Systems and methods for computerized interactive skill training

Country Status (9)

Country Link
US (3) US8571463B2 (en)
EP (1) EP2118874A4 (en)
JP (1) JP2010517098A (en)
AU (1) AU2008210903B2 (en)
BR (1) BRPI0807176A2 (en)
CA (1) CA2676137A1 (en)
MX (1) MX2009008131A (en)
SG (1) SG177988A1 (en)
WO (1) WO2008094736A2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130203026A1 (en) * 2012-02-08 2013-08-08 Jpmorgan Chase Bank, Na System and Method for Virtual Training Environment
US20140149496A1 (en) * 2012-10-09 2014-05-29 School Yourself, Inc. System and Method for Recording and Playback of Interactions on a Computing Device
US10127831B2 (en) 2008-07-28 2018-11-13 Breakthrough Performancetech, Llc Systems and methods for computerized interactive skill training
US10694037B2 (en) 2018-03-28 2020-06-23 Nice Ltd. System and method for automatically validating agent implementation of training material
US10715713B2 (en) 2018-04-30 2020-07-14 Breakthrough Performancetech, Llc Interactive application adapted for use by multiple users via a distributed computer-based system
US20220278975A1 (en) * 2020-06-29 2022-09-01 Capital One Services, Llc Systems and methods for determining knowledge-based authentication questions

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006081544A2 (en) * 2005-01-28 2006-08-03 Breakthrough Performance Technologies, Llc Systems and methods for computerized interactive training
US8571463B2 (en) 2007-01-30 2013-10-29 Breakthrough Performancetech, Llc Systems and methods for computerized interactive skill training
US8696364B2 (en) * 2007-03-28 2014-04-15 Breakthrough Performancetech, Llc Systems and methods for computerized interactive training
US8684926B2 (en) * 2008-02-25 2014-04-01 Ideal Innovations Incorporated System and method for knowledge verification utilizing biopotentials and physiologic metrics
US20090313540A1 (en) * 2008-06-14 2009-12-17 Mark Otuteye Methods and systems for automated text evaluation
US20110143323A1 (en) * 2009-12-14 2011-06-16 Cohen Robert A Language training method and system
US20110189646A1 (en) * 2010-02-01 2011-08-04 Amos Benninga Pedagogical system method and apparatus
US20110196716A1 (en) * 2010-02-10 2011-08-11 Microsoft Corporation Lead qualification based on contact relationships and customer experience
US20120221380A1 (en) * 2011-02-28 2012-08-30 Bank Of America Corporation Teller Readiness Simulation
US20120237915A1 (en) * 2011-03-16 2012-09-20 Eric Krohner System and method for assessment testing
US8887047B2 (en) 2011-06-24 2014-11-11 Breakthrough Performancetech, Llc Methods and systems for dynamically generating a training program
US20160005332A1 (en) * 2013-02-17 2016-01-07 Michal CALE Method for administering a driving test
US20150056594A1 (en) * 2013-08-21 2015-02-26 David Andrew Blake Systems and methods for measuring educational inputs
US10238310B2 (en) 2013-12-16 2019-03-26 Ideal Innovations Incorporated Knowledge discovery based on brainwave response to external stimulation
US11666267B2 (en) 2013-12-16 2023-06-06 Ideal Innovations Inc. Knowledge, interest and experience discovery by psychophysiologic response to external stimulation
JP6239558B2 (en) * 2015-06-22 2017-11-29 任天堂株式会社 Information processing system, information processing apparatus, program, and information processing apparatus control method
CN105095085B (en) * 2015-08-25 2018-01-19 暨南大学 A kind of software test experience system and method based on WEB
US20180061274A1 (en) * 2016-08-27 2018-03-01 Gereon Frahling Systems and methods for generating and delivering training scenarios
CN106981228A (en) * 2017-02-13 2017-07-25 上海大学 A kind of interactive IT technical ability online education method
KR102340446B1 (en) * 2017-09-08 2021-12-21 삼성전자주식회사 Storage device and data training method thereof
CN109256003A (en) * 2018-10-25 2019-01-22 国网上海市电力公司 A kind of power transformation O&M stimulating and training system
CN110221024A (en) * 2019-04-18 2019-09-10 天津科技大学 A kind of simulated environment detection system
CN111899588A (en) * 2020-08-18 2020-11-06 山东工业职业学院 Novel computer accounting teaching simulation system
WO2022102432A1 (en) * 2020-11-13 2022-05-19 ソニーグループ株式会社 Information processing device and information processing method
CN113436487A (en) * 2021-07-08 2021-09-24 上海松鼠课堂人工智能科技有限公司 Chinese reciting skill training method and system based on virtual reality scene

Citations (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4015344A (en) * 1972-02-29 1977-04-05 Herbert Michaels Audio visual teaching method and apparatus
US4459114A (en) 1982-10-25 1984-07-10 Barwick John H Simulation system trainer
US4493655A (en) * 1983-08-05 1985-01-15 Groff James W Radio-controlled teaching device
US4608601A (en) * 1982-07-12 1986-08-26 The Moving Picture Company Inc. Video response testing apparatus
US4643682A (en) * 1985-05-13 1987-02-17 Bernard Migler Teaching machine
US4689022A (en) * 1984-04-30 1987-08-25 John Peers System for control of a video storage means by a programmed processor
US4745468A (en) * 1986-03-10 1988-05-17 Kohorn H Von System for evaluation and recording of responses to broadcast transmissions
US5006987A (en) * 1986-03-25 1991-04-09 Harless William G Audiovisual system for simulation of an interaction between persons through output of stored dramatic scenes in response to user vocal input
US5056792A (en) 1989-02-07 1991-10-15 Helweg Larsen Brian Business education model
US5147205A (en) * 1988-01-29 1992-09-15 Gross Theodore D Tachistoscope and method of use thereof for teaching, particularly of reading and spelling
GB2271262A (en) * 1992-10-05 1994-04-06 Sajjad Muzaffar Apparatus for playing a spot-the-ball competition
US5533110A (en) 1994-11-29 1996-07-02 Mitel Corporation Human machine interface for telephone feature invocation
US5722418A (en) * 1993-08-30 1998-03-03 Bro; L. William Method for mediating social and behavioral processes in medicine and business through an interactive telecommunications guidance system
US5980429A (en) 1997-03-12 1999-11-09 Neurocom International, Inc. System and method for monitoring training programs
US6067638A (en) 1998-04-22 2000-05-23 Scientific Learning Corp. Simulated play of interactive multimedia applications for error detection
US6106298A (en) 1996-10-28 2000-08-22 Lockheed Martin Corporation Reconfigurable easily deployable simulator
US6113645A (en) 1998-04-22 2000-09-05 Scientific Learning Corp. Simulated play of interactive multimedia applications for error detection
US6125356A (en) 1996-01-18 2000-09-26 Rosefaire Development, Ltd. Portable sales presentation system with selective scripted seller prompts
JP2000330464A (en) 1999-05-21 2000-11-30 Umi Ogawa Memory training device
US6155834A (en) 1997-06-27 2000-12-05 New, Iii; Cecil A. Data driven interactive testing method, apparatus and article of manufacture for teaching a student to read
US6171112B1 (en) * 1998-09-18 2001-01-09 Wyngate, Inc. Methods and apparatus for authenticating informed consent
US6236955B1 (en) 1998-07-31 2001-05-22 Gary J. Summers Management training simulation method and system
US6296487B1 (en) 1999-06-14 2001-10-02 Ernest L. Lotecka Method and system for facilitating communicating and behavior skills training
US6319130B1 (en) * 1998-01-30 2001-11-20 Konami Co., Ltd. Character display controlling device, display controlling method, and recording medium
JP2002072843A (en) 2000-08-28 2002-03-12 Hideki Sakai Simple video recording type video teaching material for study
US20020059376A1 (en) 2000-06-02 2002-05-16 Darren Schwartz Method and system for interactive communication skill training
US6409514B1 (en) 1997-10-16 2002-06-25 Micron Electronics, Inc. Method and apparatus for managing training activities
US20020119434A1 (en) 1999-05-05 2002-08-29 Beams Brian R. System method and article of manufacture for creating chat rooms with multiple roles for multiple participants
US6516300B1 (en) * 1992-05-19 2003-02-04 Informedical, Inc. Computer accessible methods for establishing certifiable informed consent for a procedure
US6514079B1 (en) 2000-03-27 2003-02-04 Rume Interactive Interactive training method for demonstrating and teaching occupational skills
US6535713B1 (en) 1996-05-09 2003-03-18 Verizon Services Corp. Interactive training application
US20030059750A1 (en) 2000-04-06 2003-03-27 Bindler Paul R. Automated and intelligent networked-based psychological services
US20030065524A1 (en) 2001-10-01 2003-04-03 Daniela Giacchetti Virtual beauty consultant
US6589055B2 (en) * 2001-02-07 2003-07-08 American Association Of Airport Executives Interactive employee training system and method
US20030127105A1 (en) 2002-01-05 2003-07-10 Fontana Richard Remo Complete compact
US20030180699A1 (en) 2002-02-26 2003-09-25 Resor Charles P. Electronic learning aid for teaching arithmetic skills
US20040014016A1 (en) 2001-07-11 2004-01-22 Howard Popeck Evaluation and assessment system
US6684027B1 (en) * 1999-08-19 2004-01-27 Joan I. Rosenberg Method and system for recording data for the purposes of performance related skill development
US20040018477A1 (en) 1998-11-25 2004-01-29 Olsen Dale E. Apparatus and method for training using a human interaction simulator
US20040043362A1 (en) 2002-08-29 2004-03-04 Aughenbaugh Robert S. Re-configurable e-learning activity and method of making
JP2004089601A (en) 2002-09-04 2004-03-25 Aruze Corp Game server and program
US6722888B1 (en) 1995-01-20 2004-04-20 Vincent J. Macri Method and apparatus for tutorial, self and assisted instruction directed to simulated preparation, training and competitive play and entertainment
US6736642B2 (en) 1999-08-31 2004-05-18 Indeliq, Inc. Computer enabled training of a user to validate assumptions
US6755659B2 (en) 2001-07-05 2004-06-29 Access Technologies Group, Inc. Interactive training system and method
US20040166484A1 (en) 2002-12-20 2004-08-26 Mark Alan Budke System and method for simulating training scenarios
JP2004240234A (en) 2003-02-07 2004-08-26 Nippon Hoso Kyokai <Nhk> Server, system, method and program for character string correction training
US20050003330A1 (en) * 2003-07-02 2005-01-06 Mehdi Asgarinejad Interactive virtual classroom
US20050004789A1 (en) 1998-07-31 2005-01-06 Summers Gary J. Management training simulation method and system
US20050026131A1 (en) 2003-07-31 2005-02-03 Elzinga C. Bret Systems and methods for providing a dynamic continual improvement educational environment
US20050054444A1 (en) * 2002-08-20 2005-03-10 Aruze Corp. Game server and program
US20050089834A1 (en) * 2003-10-23 2005-04-28 Shapiro Jeffrey S. Educational computer program
US6909874B2 (en) 2000-04-12 2005-06-21 Thomson Licensing Sa. Interactive tutorial method, system, and computer program product for real time media production
US6913466B2 (en) * 2001-08-21 2005-07-05 Microsoft Corporation System and methods for training a trainee to classify fundamental properties of media entities
US6925601B2 (en) 2002-08-28 2005-08-02 Kelly Properties, Inc. Adaptive testing and training tool
US20050170326A1 (en) 2002-02-21 2005-08-04 Sbc Properties, L.P. Interactive dialog-based training method
US6944586B1 (en) * 1999-11-09 2005-09-13 Interactive Drama, Inc. Interactive simulated dialogue system and method for a computer network
US6976846B2 (en) 2002-05-08 2005-12-20 Accenture Global Services Gmbh Telecommunications virtual simulator
US6988239B2 (en) 2001-12-19 2006-01-17 Ge Mortgage Holdings, Llc Methods and apparatus for preparation and administration of training courses
US20060048064A1 (en) * 2004-08-31 2006-03-02 Microsoft Corporation Ambient display of data in a user interface
US7016949B1 (en) 2000-11-20 2006-03-21 Colorado Computer Training Institute Network training system with a remote, shared classroom laboratory
US20060074689A1 (en) 2002-05-16 2006-04-06 At&T Corp. System and method of providing conversational visual prosody for talking heads
US20060078863A1 (en) 2001-02-09 2006-04-13 Grow.Net, Inc. System and method for processing test reports
US20060154225A1 (en) 2005-01-07 2006-07-13 Kim Stanley A Test preparation device
US20060172275A1 (en) * 2005-01-28 2006-08-03 Cohen Martin L Systems and methods for computerized interactive training
US20060177808A1 (en) 2003-07-24 2006-08-10 Csk Holdings Corporation Apparatus for ability evaluation, method of evaluating ability, and computer program product for ability evaluation
US20060204943A1 (en) 2005-03-10 2006-09-14 Qbinternational VOIP e-learning system
US20070015121A1 (en) 2005-06-02 2007-01-18 University Of Southern California Interactive Foreign Language Teaching
US7221899B2 (en) * 2003-01-30 2007-05-22 Mitsubishi Denki Kabushiki Kaisha Customer support system
US20070188502A1 (en) 2006-02-09 2007-08-16 Bishop Wendell E Smooth morphing between personal video calling avatars
US20070245305A1 (en) 2005-10-28 2007-10-18 Anderson Jonathan B Learning content mentoring system, electronic program, and method of use
US20070245505A1 (en) 2004-02-13 2007-10-25 Abfall Tony J Disc Cleaner
US7367808B1 (en) * 2002-09-10 2008-05-06 Talentkeepers, Inc. Employee retention system and associated methods
US20080254426A1 (en) * 2007-03-28 2008-10-16 Cohen Martin L Systems and methods for computerized interactive training
US20100028846A1 (en) 2008-07-28 2010-02-04 Breakthrough Performance Tech, Llc Systems and methods for computerized interactive skill training

Family Cites Families (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3981087A (en) * 1973-10-11 1976-09-21 Sachs Thomas D Teaching machine
US3939579A (en) * 1973-12-28 1976-02-24 International Business Machines Corporation Interactive audio-visual instruction device
US4569026A (en) * 1979-02-05 1986-02-04 Best Robert M TV Movies that talk back
US4333152A (en) * 1979-02-05 1982-06-01 Best Robert M TV Movies that talk back
US4305131A (en) * 1979-02-05 1981-12-08 Best Robert M Dialog between TV movies and human viewers
US4445187A (en) * 1979-02-05 1984-04-24 Best Robert M Video games with voice dialog
US4812125A (en) 1985-05-29 1989-03-14 Sony Corporation Interactive teaching apparatus
AU652209B2 (en) * 1990-11-14 1994-08-18 Robert Macandrew Best Talking video games
US5393070A (en) * 1990-11-14 1995-02-28 Best; Robert M. Talking video games with parallel montage
US5393071A (en) * 1990-11-14 1995-02-28 Best; Robert M. Talking video games with cooperative action
US5393072A (en) * 1990-11-14 1995-02-28 Best; Robert M. Talking video games with vocal conflict
US5393073A (en) * 1990-11-14 1995-02-28 Best; Robert M. Talking video games
US5597312A (en) 1994-05-04 1997-01-28 U S West Technologies, Inc. Intelligent tutoring method and system
US5734794A (en) 1995-06-22 1998-03-31 White; Tom H. Method and system for voice-activated cell animation
GB9619165D0 (en) * 1996-09-13 1996-10-23 British Telecomm Training apparatus and method
US6470386B1 (en) * 1997-09-26 2002-10-22 Worldcom, Inc. Integrated proxy interface for web based telecommunications management tools
AUPP615898A0 (en) 1998-09-24 1998-10-15 Lewis Cadman Consulting Pty Ltd An apparatus for conducting a test
US7149690B2 (en) 1999-09-09 2006-12-12 Lucent Technologies Inc. Method and apparatus for interactive language instruction
US7725307B2 (en) * 1999-11-12 2010-05-25 Phoenix Solutions, Inc. Query engine for processing voice based queries including semantic decoding
US6507353B1 (en) 1999-12-10 2003-01-14 Godot Huard Influencing virtual actors in an interactive environment
US6826540B1 (en) 1999-12-29 2004-11-30 Virtual Personalities, Inc. Virtual human interface for conducting surveys
US6470170B1 (en) 2000-05-18 2002-10-22 Hai Xing Chen System and method for interactive distance learning and examination training
US6537076B2 (en) * 2001-02-16 2003-03-25 Golftec Enterprises Llc Method and system for presenting information for physical motion analysis
US6866515B2 (en) * 2001-03-02 2005-03-15 Bryan Cave Llp Method for providing business conduct training
US20030091970A1 (en) 2001-11-09 2003-05-15 Altsim, Inc. And University Of Southern California Method and apparatus for advanced leadership training simulation
US20050015268A1 (en) * 2001-12-15 2005-01-20 Ramon Diaz Method and apparatus for delivering building safety information
US6999930B1 (en) * 2002-03-27 2006-02-14 Extended Systems, Inc. Voice dialog server method and system
US7401295B2 (en) 2002-08-15 2008-07-15 Simulearn, Inc. Computer-based learning system
US8458028B2 (en) 2002-10-16 2013-06-04 Barbaro Technologies System and method for integrating business-related content into an electronic game
US20040210661A1 (en) * 2003-01-14 2004-10-21 Thompson Mark Gregory Systems and methods of profiling, matching and optimizing performance of large networks of individuals
US20040186743A1 (en) 2003-01-27 2004-09-23 Angel Cordero System, method and software for individuals to experience an interview simulation and to develop career and interview skills
US20050089828A1 (en) * 2003-10-02 2005-04-28 Ahmad Ayaz Language instruction methodologies
US20050175970A1 (en) * 2004-02-05 2005-08-11 David Dunlap Method and system for interactive teaching and practicing of language listening and speaking skills
US20050255430A1 (en) 2004-04-29 2005-11-17 Robert Kalinowski Speech instruction method and apparatus
US7373604B1 (en) * 2004-05-28 2008-05-13 Adobe Systems Incorporated Automatically scalable presentation of video demonstration content
WO2005122145A1 (en) * 2004-06-08 2005-12-22 Metaphor Solutions, Inc. Speech recognition dialog management
US8328559B2 (en) 2004-12-30 2012-12-11 Accenture Global Services Limited Development of training and educational experiences
US8317518B2 (en) * 2005-01-28 2012-11-27 University Of Maryland, Baltimore Techniques for implementing virtual persons in a system to train medical personnel
US20060223043A1 (en) 2005-04-01 2006-10-05 Dancy-Edwards Glynda P Method of providing and administering a web-based personal financial management course
US8170466B2 (en) 2005-05-27 2012-05-01 Ctb/Mcgraw-Hill System and method for automated assessment of constrained constructed responses
US20070117070A1 (en) * 2005-11-23 2007-05-24 Alvin Krass Vocational assessment testing device and method
WO2007109237A2 (en) 2006-03-20 2007-09-27 Jesse Schell Controlling an interactive story through manipulation of simulated character mental state
WO2008067413A2 (en) 2006-11-28 2008-06-05 Attune Interactive, Inc. Training system using an interactive prompt character
US8113844B2 (en) * 2006-12-15 2012-02-14 Atellis, Inc. Method, system, and computer-readable recording medium for synchronous multi-media recording and playback with end user control of time, data, and event visualization for playback control over a network
US20080145829A1 (en) * 2006-12-15 2008-06-19 Atellis, Inc. Synchronous multi-media recording and playback with end user control of time, data, and event visualization for playback control over a network
US20080160488A1 (en) 2006-12-28 2008-07-03 Medical Simulation Corporation Trainee-as-mentor education and training system and method
US8504926B2 (en) * 2007-01-17 2013-08-06 Lupus Labs Ug Model based avatars for virtual presence
US8571463B2 (en) * 2007-01-30 2013-10-29 Breakthrough Performancetech, Llc Systems and methods for computerized interactive skill training
US8001469B2 (en) * 2007-11-07 2011-08-16 Robert Bosch Gmbh Automatic generation of interactive systems from a formalized description language
US8537196B2 (en) 2008-10-06 2013-09-17 Microsoft Corporation Multi-device capture and spatial browsing of conferences
US8943394B2 (en) * 2008-11-19 2015-01-27 Robert Bosch Gmbh System and method for interacting with live agents in an automated call center
GB0822809D0 (en) * 2008-12-15 2009-01-21 Materialise Dental Nv Method for multi-person and multi-site interactive treatment simulation
WO2010127236A1 (en) * 2009-05-01 2010-11-04 Radoje Drmanac Systems, computer readable program products, and computer implemented methods to facilitate on-demand, user-driven, virtual sponsoring sessions for one or more user-selected topics through user-designed virtual sponsors
WO2011005973A2 (en) * 2009-07-08 2011-01-13 The University Of Memphis Research Foundation Methods and computer-program products for teaching a topic to a user
WO2011061758A1 (en) 2009-11-18 2011-05-26 Kumar Gl Umesh Assessment for efficient learning and top performance in competitive exams - system, method, user interface- and a computer application
US20110172873A1 (en) * 2010-01-08 2011-07-14 Ford Global Technologies, Llc Emotive advisory system vehicle maintenance advisor
US20110223574A1 (en) * 2010-03-15 2011-09-15 Crawford Benjamin F Directed Collaboration Platform for Online Virtual Coaching and Training
US20110282662A1 (en) * 2010-05-11 2011-11-17 Seiko Epson Corporation Customer Service Data Recording Device, Customer Service Data Recording Method, and Recording Medium
US20130260346A1 (en) * 2010-08-20 2013-10-03 Smarty Ants Inc. Interactive learning method, apparatus, and system
US10388178B2 (en) * 2010-08-27 2019-08-20 Arthur Carl Graesser Affect-sensitive intelligent tutoring system
AU2011316720A1 (en) * 2010-10-11 2013-05-23 Teachscape, Inc. Methods and systems for capturing, processing, managing and/or evaluating multimedia content of observed persons performing a task
US20120115114A1 (en) 2010-11-10 2012-05-10 Donal Daly Apparatus and Methods for Coaching Salespersons
US8887047B2 (en) 2011-06-24 2014-11-11 Breakthrough Performancetech, Llc Methods and systems for dynamically generating a training program
US20130252224A1 (en) 2012-03-21 2013-09-26 Charles J. Smith Method and System for Knowledge Assessment And Learning
US8924327B2 (en) * 2012-06-28 2014-12-30 Nokia Corporation Method and apparatus for providing rapport management
US20140113263A1 (en) * 2012-10-20 2014-04-24 The University Of Maryland, Baltimore County Clinical Training and Advice Based on Cognitive Agent with Psychological Profile
US8894417B2 (en) 2013-02-05 2014-11-25 Ayla Mandel Guiding a child to perform tasks
US9601026B1 (en) 2013-03-07 2017-03-21 Posit Science Corporation Neuroplasticity games for depression
US9691296B2 (en) 2013-06-03 2017-06-27 Massachusetts Institute Of Technology Methods and apparatus for conversation coach
US20150312520A1 (en) * 2014-04-23 2015-10-29 President And Fellows Of Harvard College Telepresence apparatus and method enabling a case-study approach to lecturing and teaching
US10762463B2 (en) * 2014-08-28 2020-09-01 Nicolas Bissantz Electronic boss

Patent Citations (84)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4015344A (en) * 1972-02-29 1977-04-05 Herbert Michaels Audio visual teaching method and apparatus
US4608601A (en) * 1982-07-12 1986-08-26 The Moving Picture Company Inc. Video response testing apparatus
US4459114A (en) 1982-10-25 1984-07-10 Barwick John H Simulation system trainer
WO1985005715A1 (en) 1982-10-25 1985-12-19 Barwick John H Simulation system trainer
US4493655A (en) * 1983-08-05 1985-01-15 Groff James W Radio-controlled teaching device
US4689022A (en) * 1984-04-30 1987-08-25 John Peers System for control of a video storage means by a programmed processor
US4643682A (en) * 1985-05-13 1987-02-17 Bernard Migler Teaching machine
US4745468B1 (en) * 1986-03-10 1991-06-11 System for evaluation and recording of responses to broadcast transmissions
US4745468A (en) * 1986-03-10 1988-05-17 Kohorn H Von System for evaluation and recording of responses to broadcast transmissions
US5006987A (en) * 1986-03-25 1991-04-09 Harless William G Audiovisual system for simulation of an interaction between persons through output of stored dramatic scenes in response to user vocal input
US5147205A (en) * 1988-01-29 1992-09-15 Gross Theodore D Tachistoscope and method of use thereof for teaching, particularly of reading and spelling
US5056792A (en) 1989-02-07 1991-10-15 Helweg Larsen Brian Business education model
US6516300B1 (en) * 1992-05-19 2003-02-04 Informedical, Inc. Computer accessible methods for establishing certifiable informed consent for a procedure
GB2271262A (en) * 1992-10-05 1994-04-06 Sajjad Muzaffar Apparatus for playing a spot-the-ball competition
US5722418A (en) * 1993-08-30 1998-03-03 Bro; L. William Method for mediating social and behavioral processes in medicine and business through an interactive telecommunications guidance system
US5533110A (en) 1994-11-29 1996-07-02 Mitel Corporation Human machine interface for telephone feature invocation
US6966778B2 (en) 1995-01-20 2005-11-22 Vincent J. Macri Method and apparatus for tutorial, self and assisted instruction directed to simulated preparation, training and competitive play and entertainment
US6722888B1 (en) 1995-01-20 2004-04-20 Vincent J. Macri Method and apparatus for tutorial, self and assisted instruction directed to simulated preparation, training and competitive play and entertainment
US6125356A (en) 1996-01-18 2000-09-26 Rosefaire Development, Ltd. Portable sales presentation system with selective scripted seller prompts
US6535713B1 (en) 1996-05-09 2003-03-18 Verizon Services Corp. Interactive training application
US6106298A (en) 1996-10-28 2000-08-22 Lockheed Martin Corporation Reconfigurable easily deployable simulator
US6632158B1 (en) 1997-03-12 2003-10-14 Neurocom International, Inc. Monitoring of training programs
US5980429A (en) 1997-03-12 1999-11-09 Neurocom International, Inc. System and method for monitoring training programs
US6190287B1 (en) 1997-03-12 2001-02-20 Neurocom International, Inc. Method for monitoring training programs
US6155834A (en) 1997-06-27 2000-12-05 New, Iii; Cecil A. Data driven interactive testing method, apparatus and article of manufacture for teaching a student to read
US6409514B1 (en) 1997-10-16 2002-06-25 Micron Electronics, Inc. Method and apparatus for managing training activities
US6319130B1 (en) * 1998-01-30 2001-11-20 Konami Co., Ltd. Character display controlling device, display controlling method, and recording medium
US6067638A (en) 1998-04-22 2000-05-23 Scientific Learning Corp. Simulated play of interactive multimedia applications for error detection
US6113645A (en) 1998-04-22 2000-09-05 Scientific Learning Corp. Simulated play of interactive multimedia applications for error detection
US20050004789A1 (en) 1998-07-31 2005-01-06 Summers Gary J. Management training simulation method and system
US6236955B1 (en) 1998-07-31 2001-05-22 Gary J. Summers Management training simulation method and system
US6171112B1 (en) * 1998-09-18 2001-01-09 Wyngate, Inc. Methods and apparatus for authenticating informed consent
US20040018477A1 (en) 1998-11-25 2004-01-29 Olsen Dale E. Apparatus and method for training using a human interaction simulator
US20020119434A1 (en) 1999-05-05 2002-08-29 Beams Brian R. System method and article of manufacture for creating chat rooms with multiple roles for multiple participants
JP2000330464A (en) 1999-05-21 2000-11-30 Umi Ogawa Memory training device
US6296487B1 (en) 1999-06-14 2001-10-02 Ernest L. Lotecka Method and system for facilitating communicating and behavior skills training
US6684027B1 (en) * 1999-08-19 2004-01-27 Joan I. Rosenberg Method and system for recording data for the purposes of performance related skill development
US6736642B2 (en) 1999-08-31 2004-05-18 Indeliq, Inc. Computer enabled training of a user to validate assumptions
US6944586B1 (en) * 1999-11-09 2005-09-13 Interactive Drama, Inc. Interactive simulated dialogue system and method for a computer network
US6514079B1 (en) 2000-03-27 2003-02-04 Rume Interactive Interactive training method for demonstrating and teaching occupational skills
US20030059750A1 (en) 2000-04-06 2003-03-27 Bindler Paul R. Automated and intelligent networked-based psychological services
US6909874B2 (en) 2000-04-12 2005-06-21 Thomson Licensing Sa. Interactive tutorial method, system, and computer program product for real time media production
US20020059376A1 (en) 2000-06-02 2002-05-16 Darren Schwartz Method and system for interactive communication skill training
US6705869B2 (en) 2000-06-02 2004-03-16 Darren Schwartz Method and system for interactive communication skill training
JP2002072843A (en) 2000-08-28 2002-03-12 Hideki Sakai Simple video recording type video teaching material for study
US7016949B1 (en) 2000-11-20 2006-03-21 Colorado Computer Training Institute Network training system with a remote, shared classroom laboratory
US6589055B2 (en) * 2001-02-07 2003-07-08 American Association Of Airport Executives Interactive employee training system and method
US20060078863A1 (en) 2001-02-09 2006-04-13 Grow.Net, Inc. System and method for processing test reports
US6755659B2 (en) 2001-07-05 2004-06-29 Access Technologies Group, Inc. Interactive training system and method
US20040014016A1 (en) 2001-07-11 2004-01-22 Howard Popeck Evaluation and assessment system
US6913466B2 (en) * 2001-08-21 2005-07-05 Microsoft Corporation System and methods for training a trainee to classify fundamental properties of media entities
US20030065524A1 (en) 2001-10-01 2003-04-03 Daniela Giacchetti Virtual beauty consultant
US6988239B2 (en) 2001-12-19 2006-01-17 Ge Mortgage Holdings, Llc Methods and apparatus for preparation and administration of training courses
US20030127105A1 (en) 2002-01-05 2003-07-10 Fontana Richard Remo Complete compact
US20050170326A1 (en) 2002-02-21 2005-08-04 Sbc Properties, L.P. Interactive dialog-based training method
US20030180699A1 (en) 2002-02-26 2003-09-25 Resor Charles P. Electronic learning aid for teaching arithmetic skills
US6976846B2 (en) 2002-05-08 2005-12-20 Accenture Global Services Gmbh Telecommunications virtual simulator
US20060074689A1 (en) 2002-05-16 2006-04-06 At&T Corp. System and method of providing conversational visual prosody for talking heads
US20050054444A1 (en) * 2002-08-20 2005-03-10 Aruze Corp. Game server and program
US6925601B2 (en) 2002-08-28 2005-08-02 Kelly Properties, Inc. Adaptive testing and training tool
US20040043362A1 (en) 2002-08-29 2004-03-04 Aughenbaugh Robert S. Re-configurable e-learning activity and method of making
JP2004089601A (en) 2002-09-04 2004-03-25 Aruze Corp Game server and program
US7367808B1 (en) * 2002-09-10 2008-05-06 Talentkeepers, Inc. Employee retention system and associated methods
US20040166484A1 (en) 2002-12-20 2004-08-26 Mark Alan Budke System and method for simulating training scenarios
US7221899B2 (en) * 2003-01-30 2007-05-22 Mitsubishi Denki Kabushiki Kaisha Customer support system
JP2004240234A (en) 2003-02-07 2004-08-26 Nippon Hoso Kyokai <Nhk> Server, system, method and program for character string correction training
US20050003330A1 (en) * 2003-07-02 2005-01-06 Mehdi Asgarinejad Interactive virtual classroom
US20060177808A1 (en) 2003-07-24 2006-08-10 Csk Holdings Corporation Apparatus for ability evaluation, method of evaluating ability, and computer program product for ability evaluation
US20050026131A1 (en) 2003-07-31 2005-02-03 Elzinga C. Bret Systems and methods for providing a dynamic continual improvement educational environment
US20050089834A1 (en) * 2003-10-23 2005-04-28 Shapiro Jeffrey S. Educational computer program
US20070245505A1 (en) 2004-02-13 2007-10-25 Abfall Tony J Disc Cleaner
US20060048064A1 (en) * 2004-08-31 2006-03-02 Microsoft Corporation Ambient display of data in a user interface
US20060154225A1 (en) 2005-01-07 2006-07-13 Kim Stanley A Test preparation device
US20060172275A1 (en) * 2005-01-28 2006-08-03 Cohen Martin L Systems and methods for computerized interactive training
US20060204943A1 (en) 2005-03-10 2006-09-14 Qbinternational VOIP e-learning system
US20070015121A1 (en) 2005-06-02 2007-01-18 University Of Southern California Interactive Foreign Language Teaching
US20070245305A1 (en) 2005-10-28 2007-10-18 Anderson Jonathan B Learning content mentoring system, electronic program, and method of use
US20070188502A1 (en) 2006-02-09 2007-08-16 Bishop Wendell E Smooth morphing between personal video calling avatars
US20080254426A1 (en) * 2007-03-28 2008-10-16 Cohen Martin L Systems and methods for computerized interactive training
US20080254423A1 (en) * 2007-03-28 2008-10-16 Cohen Martin L Systems and methods for computerized interactive training
US20080254424A1 (en) * 2007-03-28 2008-10-16 Cohen Martin L Systems and methods for computerized interactive training
US20080254425A1 (en) * 2007-03-28 2008-10-16 Cohen Martin L Systems and methods for computerized interactive training
US20080254419A1 (en) * 2007-03-28 2008-10-16 Cohen Martin L Systems and methods for computerized interactive training
US20100028846A1 (en) 2008-07-28 2010-02-04 Breakthrough Performance Tech, Llc Systems and methods for computerized interactive skill training

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
English translation of Japanese Office Action regarding Japanese Patent Application No. 2007-553313, dated Mar. 12, 2012 and transmitted on Mar. 21, 2012.
International Search Report and Written Opinion, PCT Application No. PCT/US08/58781, filed Mar. 28, 2008; mailed Oct. 1, 2008.
PCT International Preliminary Report on Patentability, PCT Application No. PCT/US2006/003174, dated Mar. 31, 2009.
PCT International Search Report and Written Opinion, PCT Application No. PCT/US2006/003174, dated Jul. 23, 2008.
PCT International Search Report and Written Opinion, PCT Application No. PCT/US2009/051994, dated Sep. 23, 2009.
PCT International Search Report and Written Opinion, PCT Application No. PCT/US08/58781, dated Oct. 1, 2008.
PCT International Search Report and Written Opinion, PCT Application No. PCT/US08/50806, filed Jan. 10, 2008; mailed Jul. 8, 2008.

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10127831B2 (en) 2008-07-28 2018-11-13 Breakthrough Performancetech, Llc Systems and methods for computerized interactive skill training
US11636406B2 (en) 2008-07-28 2023-04-25 Breakthrough Performancetech, Llc Systems and methods for computerized interactive skill training
US11227240B2 (en) 2008-07-28 2022-01-18 Breakthrough Performancetech, Llc Systems and methods for computerized interactive skill training
US20130203026A1 (en) * 2012-02-08 2013-08-08 Jpmorgan Chase Bank, Na System and Method for Virtual Training Environment
US20140149496A1 (en) * 2012-10-09 2014-05-29 School Yourself, Inc. System and Method for Recording and Playback of Interactions on a Computing Device
US9836983B2 (en) * 2012-10-09 2017-12-05 Amplify Education, Inc. System and method for recording and playback of interactions on a computing device
US10798243B2 (en) 2018-03-28 2020-10-06 Nice Ltd. System and method for automatically validating agent implementation of training material
US10868911B1 (en) 2018-03-28 2020-12-15 Nice Ltd. System and method for automatically validating agent implementation of training material
US10694037B2 (en) 2018-03-28 2020-06-23 Nice Ltd. System and method for automatically validating agent implementation of training material
US10715713B2 (en) 2018-04-30 2020-07-14 Breakthrough Performancetech, Llc Interactive application adapted for use by multiple users via a distributed computer-based system
US11463611B2 (en) 2018-04-30 2022-10-04 Breakthrough Performancetech, Llc Interactive application adapted for use by multiple users via a distributed computer-based system
US11871109B2 (en) 2018-04-30 2024-01-09 Breakthrough Performancetech, Llc Interactive application adapted for use by multiple users via a distributed computer-based system
US20220278975A1 (en) * 2020-06-29 2022-09-01 Capital One Services, Llc Systems and methods for determining knowledge-based authentication questions

Also Published As

Publication number Publication date
WO2008094736A3 (en) 2008-09-18
US10152897B2 (en) 2018-12-11
BRPI0807176A2 (en) 2014-05-27
WO2008094736A2 (en) 2008-08-07
AU2008210903A1 (en) 2008-08-07
AU2008210903B2 (en) 2013-08-22
US9633572B2 (en) 2017-04-25
MX2009008131A (en) 2009-09-09
US20080182231A1 (en) 2008-07-31
EP2118874A2 (en) 2009-11-18
CA2676137A1 (en) 2008-08-07
US20170221372A1 (en) 2017-08-03
US20140248593A1 (en) 2014-09-04
SG177988A1 (en) 2012-02-28
JP2010517098A (en) 2010-05-20
EP2118874A4 (en) 2014-12-17

Similar Documents

Publication Publication Date Title
US10152897B2 (en) Systems and methods for computerized interactive skill training
US11636406B2 (en) Systems and methods for computerized interactive skill training
JP6181559B2 (en) Systems and methods for adaptive knowledge assessment and learning
JP6606750B2 (en) E-learning system
US20190385471A1 (en) Assessment-based assignment of remediation and enhancement activities
WO2009023802A1 (en) Methods, systems, and media for computer-based learning
JPH10207335A (en) Interactive learning system having previous test
US20120129141A1 (en) e-Learning System
US20090081623A1 (en) Instructional and computerized spelling systems, methods and interfaces
TWI534767B (en) Learning method of assessment
JP3742014B2 (en) Learning content acquisition confirmation system
Brus Applying E-learning and persuasive design: teaching new users of an online accounting tool the basics of online bookkeeping
KR20010016638A (en) Cyber intelligence study methode using Fuzzy theory
KR101121206B1 (en) System for providing service of vocabulardy learning in on-line
KR20090015556A (en) Internet educational method and system for question-answer between teach.er and student

Legal Events

Date Code Title Description
AS Assignment

Owner name: BREAKTHROUGH PERFORMANCE TECHNOLOGIES, L.L.C., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:ADVANCED LISTENING TECHNOLOGIES, LLC;REEL/FRAME:019503/0432

Effective date: 20070124

AS Assignment

Owner name: BREAKTHROUGH PERFORMANCE TECH, LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:BREAKTHROUGH PERFORMANCE TECHNOLOGY, LLC;REEL/FRAME:021191/0184

Effective date: 20080428

Owner name: BREAKTHROUGH PERFORMANCE TECH, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:COHEN, MARTIN L.;BROWN, EDWARD G.;REEL/FRAME:021191/0187

Effective date: 20080623

Owner name: BREAKTHROUGH PERFORMANCETECH, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:COHEN, MARTIN L.;BROWN, EDWARD G.;REEL/FRAME:021191/0187

Effective date: 20080623

Owner name: BREAKTHROUGH PERFORMANCETECH, LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:BREAKTHROUGH PERFORMANCE TECHNOLOGIES, LLC;REEL/FRAME:021191/0184

Effective date: 20080428

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 8