US20070288406A1 - Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain


Info

Publication number
US20070288406A1
US20070288406A1 · US11/758,667 · US75866707A
Authority
US
United States
Prior art keywords
neuron
temperament
analyzer
brain
parameters
Prior art date
Legal status
Granted
Application number
US11/758,667
Other versions
US7925492B2
Inventor
Thomas Visel
Current Assignee
Neuric Tech LLC
Original Assignee
Neuric Tech LLC
Priority date
Filing date
Publication date
Priority claimed from US11/154,313 external-priority patent/US7089218B1/en
Priority claimed from US11/670,959 external-priority patent/US20070156625A1/en
Priority claimed from US11/697,721 external-priority patent/US8001067B2/en
Priority to US11/758,667 priority Critical patent/US7925492B2/en
Application filed by Neuric Tech LLC filed Critical Neuric Tech LLC
Assigned to NEURIC TECHNOLOGIES, L.L.C. reassignment NEURIC TECHNOLOGIES, L.L.C. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VISEL, THOMAS A.
Publication of US20070288406A1 publication Critical patent/US20070288406A1/en
Priority to US12/101,904 priority patent/US8725493B2/en
Priority to US12/136,670 priority patent/US20080243741A1/en
Priority to US12/645,317 priority patent/US8473449B2/en
Publication of US7925492B2 publication Critical patent/US7925492B2/en
Priority to US13/085,289 priority patent/US9064211B2/en
Application granted
Priority to US13/926,860 priority patent/US20140012798A1/en
Priority to US14/273,143 priority patent/US20140303963A1/en
Status: Expired - Fee Related
Adjusted expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/211Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/10Interfaces, programming languages or software development kits, e.g. for simulating neural networks

Definitions

  • U.S. Provisional Application for Patent Ser. No. 60/534,641 entitled A NEURIC BRAIN MODELING SYSTEM IN THE MILITARY ENVIRONMENT
  • U.S. Provisional Application for Patent Ser. No. 60/534,492 entitled METHOD FOR INCLUSION OF PSYCHOLOGICAL TEMPERAMENT IN AN ELECTRONIC EMULATION OF THE HUMAN BRAIN
  • U.S. Provisional Application for Patent Ser. No. 60/534,659 entitled DESIGN OF THE NEURIC BRAIN, all filed Jan. 6, 2004, now expired
  • U.S. Provisional Application for Patent Ser. No. 60/764,442 filed Feb. 2, 2006, and entitled USE OF THE NEURIC BRAIN MODEL IN MOVIE ANIMATION (Atty. Dkt. No.
  • the present invention pertains in general to artificial intelligence and, more particularly, to emulating a human.
  • a human is required in order to assess the conditions surrounding the operation of a particular system or the execution of a certain task, and to determine whether the progress of the task or the observed conditions in the system have changed.
  • Based upon that user's experience and information, some type of action can be taken. For example, a user might make a change to the system in order to maintain the system in a particular operating range, this being the task of that individual. Further, the user might be tasked to achieve certain results with the system. In this situation, the user would take certain actions, monitor the operations, i.e., the surrounding environment, and then take additional actions if necessary to ensure that the tasks are achieved.
  • an expert system is intended to operate a particular system in place of the human. These expert systems are trained based upon an expert human, wherein that human is presented with certain conditions and the actions taken are recorded. Whenever those conditions exist with respect to the expert system, the expert system will take the same actions. However, if conditions exist that were not part of the training dataset for the expert system, the results would be questionable, as the expert system has no basis upon which to make such a decision.
  • the present invention disclosed and claimed herein comprises a method for emulating human cognition in electronic form.
  • Information is received in the form of a textual or voice input in a natural language.
  • This is parsed into pre-determined phrases based on a stored set of language rules for the natural language. The parsed phrases are then examined to determine whether they define aspects of an environment and, if so, adaptive weighting factors are created for the natural language, the created weighting factors being operable to produce a weighted decision based upon the natural language. It is then determined whether the parsed phrases constitute a query and, if so, the weighting factors are used to make a decision in response to the query.
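  • By way of illustration only, the following minimal sketch shows the parse-weight-decide flow summarized above; the function and class names (parse_phrases, WeightedDecider) are assumptions of this example, not the claimed implementation.

```python
# Minimal sketch of the summarized flow: parse text into phrases, let phrases that
# describe the environment create adaptive weighting factors, and answer a query
# from those weights. All names here are illustrative, not from the patent.
from dataclasses import dataclass, field


@dataclass
class WeightedDecider:
    weights: dict = field(default_factory=dict)  # adaptive weighting factors

    def observe(self, phrase, strength):
        """Phrases that define aspects of the environment adjust the weights."""
        self.weights[phrase] = self.weights.get(phrase, 0.0) + strength

    def decide(self, query):
        """A query is answered from the accumulated weights (0..1 confidence)."""
        relevant = [w for p, w in self.weights.items() if query in p]
        return min(1.0, sum(relevant))


def parse_phrases(text):
    # Stand-in for the rule-based natural-language parse; splits on periods only.
    return [p.strip() for p in text.split(".") if p.strip()]


brain = WeightedDecider()
for phrase in parse_phrases("A box is near the character. The box is heavy."):
    brain.observe(phrase, 0.5)   # environment statements create weighting factors
print(brain.decide("box"))       # a query consults the weighted factors
```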
  • FIG. 1 illustrates a diagrammatic block diagram of the overall animation system
  • FIGS. 2a and 2b illustrate diagrammatic views of an animation sequence
  • FIG. 3 illustrates Influence Inclusion—An example of weighted random influence
  • FIG. 4 illustrates Implementation of the Brain Emulation—Block diagram of brain emulation
  • FIG. 5 illustrates Language Grammar Sample—Example of natural language grammar description
  • FIG. 6 illustrates Example Parser Diagnostic Trace—Example trace of grammar parsing
  • FIG. 7 illustrates Example Relationals Between Neurons
  • FIG. 8 illustrates Organization of Neuron Tables—General organization of neuron memory lists
  • FIG. 9 illustrates Table of Neurons—Internal organization of a neuron
  • FIG. 10 illustrates Example Relational Record—Contents of inter-neuron relationship record
  • FIG. 11 illustrates Event Queue and Memory—Organization of the event processor
  • FIG. 12 illustrates Content of an Event—General internal contents of an event record
  • FIG. 13 illustrates A Deference Table—Example table of orders of deference
  • FIG. 14 illustrates The Layered-Temperament Personality
  • FIG. 15 illustrates Characteristic Traits of the Temperaments
  • FIGS. 16 A-D illustrate The Four Composite Temperament Models
  • FIG. 17 illustrates Typical Temperament—Weighting of Parameters
  • FIG. 18 illustrates Implementation of Pressure or Trauma
  • FIG. 19 illustrates Network-Connected Brain Emulation
  • FIG. 20 illustrates Example Battleforce Simulation Cluster
  • FIG. 21 illustrates Example Integrated Battleforce Simulation System
  • FIG. 22 illustrates sample relational connections
  • FIG. 23 illustrates implied relationals in linkages
  • FIG. 24 illustrates the “not” relationships
  • FIG. 25 illustrates a diagrammatic view of an application of the gamut to a particular mental state, directness
  • FIG. 26 illustrates a diagrammatic view of a specific gamut list interfaced with different neurons and their relational links
  • FIG. 27 illustrates a detailed view of a particular neuron and the gamut associated therewith and the interface with various other neurons
  • FIG. 28 illustrates a diagrammatic view of an example of the interface between two systems
  • FIGS. 29 a and 29 b illustrate diagrammatic views of two types of gamuts.
  • Referring to FIG. 1, there is illustrated an overall diagrammatic view of the system of the present disclosure.
  • This system is directed toward the concept of controlling characters in an animation engine 101 through the use of central animation brains 102, one associated with a first character named "John" and one associated with a second character named "Jane."
  • Each of these brains 102 is capable of being trained to express emotion, that emotion being translated into control signals that can be put into the animation engine 101 .
  • a communication path 104 is provided for communicating information from each of the brains 102 over to the animation engine 101 .
  • This communication path can be any type of communication path, such as a TCP/IP protocol. Of course, it is well understood that any type of communication path can be utilized.
  • Each of the brains 102, after training, will have a character, this character being defined in a memory system 105 associated with each of the characters. These memories contain the various experiences of the particular character and various weights. These are trained and adaptable.
  • the screen-writer's material that is defined in a script 106 basically instructs the particular brain or character to do a particular action or instructs an input to occur in the animation engine 101 .
  • the animation engine 101 can be directed to drop a box in close proximity to a particular character. This essentially is in the “virtual world” of the particular character.
  • this action can then be interpreted by the brain and experience is gained from that action through these various inputs.
  • this action in the animation engine 101 can elicit various emotional responses, which responses are in direct reaction to the environmental changes within this virtual world proximate to the character; these in turn cause the brain to generate outputs to express emotion, this being facilitated by the animation engine.
  • a character 202 is provided in the virtual world defined as a series of vertices in the x, y and z direction.
  • the character 202 is basically a character that can be represented through various concepts, but has a positional relationship with respect to the environment. There will be a head which can rotate in all directions and which will have perception points, those perception points being, for example, the eyes, the nose, the mouth and even the ears. In this example, only the concept of vision will be described. Therefore, there will be two points in the virtual space that represent vision.
  • FIG. 2b illustrates a situation wherein the individual is aware of the box in the personal environment and in proximity thereto and, after seeing the box, views the box as a threat.
  • the central brain of the present disclosure distills the temperament, personality and instantaneous state of a human individual into a series of Brain Parameters. Each of these has a value varying from zero to 100 percent, and is loosely equivalent to a single neuron. These parameters collectively define the state of the person's being and specify matters of temperament and personality. Some parameters are fixed and seldom if ever change, while others change dynamically with present conditions.
  • Parameters and their definitions are not germane to the system of the present disclosure, and may include more or fewer parameters in any given implementation thereof. Numerous parameters define certain tendencies specific to certain temperaments. Some define the present emotional state, such as sense of confidence in a decision. Others are place-holders that define such things as the present topic of conversation or who the first, second or third persons in the conversation are. Yet others define physical parameters such as orientation within the environment, sense of direction, timing and the like.
  • Some brain Parameters may be loosely arranged in a hierarchical fashion, while others are not, such that altering any one parameter may affect others lower in the hierarchy. This arrangement simplifies the implementation of personality.
  • Example Parameters: Table 1 illustrates a few of several hundred such parameters by way of example.
  • the ‘Choleric’ parameter 202, for example, is ‘above’ others in the hierarchy, in that altering the percentage of Choleric temperament affects the value of many other parameters. For example, it affects the Propensity to Decide 222.
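  • As a rough illustration of this hierarchy (the class layout and contribution value below are assumptions of this sketch, not the patent's code), a higher parameter such as Choleric can ripple its changes down to dependents such as the Propensity to Decide:

```python
# Sketch of hierarchical Brain Parameters: each holds a 0..100 percent value, and
# altering a "higher" parameter ripples to parameters lower in the hierarchy.
class BrainParameter:
    def __init__(self, name, value=50.0):
        self.name = name
        self.value = value              # percent, 0..100
        self.children = []              # (child, contribution) pairs

    def add_child(self, child, contribution):
        """contribution: fraction of this parameter's value passed to the child."""
        self.children.append((child, contribution))

    def set(self, value):
        self.value = max(0.0, min(100.0, value))
        for child, contribution in self.children:
            # altering a parameter affects parameters lower in the hierarchy
            child.set(child.value * (1 - contribution) + self.value * contribution)


choleric = BrainParameter("Choleric", 70.0)
decide = BrainParameter("Propensity to Decide", 50.0)
choleric.add_child(decide, 0.6)
choleric.set(90.0)                      # raising Choleric pulls Decide upward
print(round(decide.value, 1))           # 74.0
```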
  • Each can be treated as a neuron that may be interconnected with other (non-parameter) neurons.
  • the parameter neurons may serve in a role similar to an I/O port in a digital computer.
  • Example Decision-Related State Parameters: Some of the key state parameters used in the decision process are detailed below. Some are set by personality traits, some by the context of the moment, and are described elsewhere. Several have baseline values established by the ‘Propensity to’ parameters.
  • Activity Threshold 237 is the minimum percentage of full-scale that a neuron must fire before it is considered a candidate for inclusion in short-term memory.
  • Base Decision Threshold 250 is a personality-based starting basis for the decision threshold. Long-term training and learning experience can raise or lower the base value.
  • Correlating Facts 235 is true if the correlator portion of the analyzer is presently correlating facts, usually in support of an analyzer decision.
  • Hottest Node 236 points to the hottest-firing neuron in the context pool (short-term memory).
  • the analyzer uses it for scaling decision thresholds.
  • Importance for Action 215 is the relative importance of making a decision. It is initially based on the propensity for importance of action, and can be scaled up or down by the analyzer as the result of recent decisions.
  • Need for Completeness 260 indicates the relative need for complete (and quality) facts, prior to making a decision. Incomplete facts will cause the Conjector to make suitable guesses, but the resulting ‘facts’ will be of lower quality.
  • Urgency for Action 216 represents the urgency (not the importance) of making a decision. Higher levels of urgency make lower quality of information (and decisions) acceptable.
  • Example Temperament-Based Propensity Parameters: A typical set of basic brain Parameters that indicate various propensities based upon temperament is given in Table 2, including representative contribution ratios (given as percentages). This set of values is by no means complete and is given for the sake of describing the mechanisms of this disclosure. Other Temperament Parameters may be identified and included in this list without altering the methods and claims of this patent.
  • the system of the present disclosure presumes the use of a node that defines the desired underlying temperament, and additional nodes that define the desired percentages of the four temperaments.
  • Table 2 is a chart of the selected typical tendencies for each of the temperaments, with each numeric value giving the approximate likelihood of the given trait to be demonstrated by the four temperaments, as a percentage.
  • the overall impact of a temperament is given by the product of the temperament's percentage, as pre-selected to produce the desired personality, and the percentage of likelihood given for each propensity from Table 2. This is demonstrated in FIGS. 4 and 5 . These may be augmented by additional variations due to the Gender 201 parameter, accounting for differences in response by male or female gender.
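  • The weighting just described can be sketched as follows; the likelihood percentages below are placeholders standing in for Table 2 entries, not the actual values:

```python
# Composite propensity = sum over temperaments of
#   (selected temperament percentage) x (likelihood for that temperament).
# The numbers below are made-up stand-ins for Table 2, for illustration only.
TABLE2 = {
    "Propensity to Decide": {"Choleric": 90, "Sanguine": 70, "Melancholy": 30, "Phlegmatic": 20},
    "Propensity for Completeness": {"Choleric": 20, "Sanguine": 20, "Melancholy": 90, "Phlegmatic": 50},
}


def derive_propensities(temperament_mix):
    """temperament_mix: selected percentage of each temperament (sums to 100)."""
    derived = {}
    for propensity, likelihoods in TABLE2.items():
        derived[propensity] = sum(
            temperament_mix.get(t, 0.0) / 100.0 * pct for t, pct in likelihoods.items()
        )
    return derived


# A personality pre-selected as 60% Choleric, 30% Sanguine, 10% Melancholy:
print(derive_propensities({"Choleric": 60, "Sanguine": 30, "Melancholy": 10}))
```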
  • the Propensity for Amusement 210 is the tendency to be amused. The higher values lower the threshold of what is found to be amusing, triggering amusement sooner. The triggering of amusement may be reflected in the appropriate facial expressions, as provided for in the underlying brain model and skeletal mechanics, if any.
  • the Propensity for Completeness 211 is a measure of the personality's tendency to need complete facts before making a decision, and is based solely on temperament selection. It is naturally highest for the Melancholy and naturally lowest for the Sanguine or Choleric. While it is normally not altered, the underlying brain model (analyzer) can raise or lower this parameter based upon training or learning.
  • the Propensity for Determination 212 is the tendency for the brain emulation to be determined, and sets the baseline value for the sense of determination. Over time, it can be permanently altered by achievement (or failure to achieve) targets or goals.
  • the Propensity for Patience 217 is a measure of the overall tendency for patience. The level is normally high for a Phlegmatic and low for a Sanguine, but is also significantly affected by (long term) experience history. Growth in this trait parameter is very slow and is an iterative process. High levels of Patience 217 can suppress early termination of action, when faced with repeated failure to meet short- or long-term goals.
  • the Propensity for Fun 214 defines the tendency of the temperament to make decisions based on the sense of feel-good. It is temperament dependent, tends to be highest for the Sanguine, and heavily influences the impact of Rhythm Influence.
  • the Propensity for Importance of Action 215 is a measure of the temperament's tendency to find action important, whether or not all the facts needed for decision are available and with high confidence. It is naturally highest for the Choleric and naturally lowest for the Melancholy and Phlegmatic. While it is normally not altered, the underlying brain emulation can raise or lower this parameter based upon training or learning.
  • the Propensity for Urgency of Action 216 is a measure of the personality's tendency to find action important, at the expense of strong consideration or analysis of the facts. It is naturally highest for the Sanguine and naturally lowest for the Phlegmatic. While it is normally not altered, the underlying brain emulation can raise or lower this parameter based upon training or learning.
  • the Propensity for Rhythm Influence 218 is a temperament-dependent parameter, and may be altered upward or downward by hyperactivity. It controls the relative effect of rhythm on the decision process. Its baseline value is relatively higher for the Sanguine.
  • the Propensity for Stability 219 is a temperament-dependent parameter that defines the tendency towards stability. When the value is high, decisions will tend to be made that lead to no net change, in the sense of foot-dragging. It also implies a tendency to procrastinate, and is a strength (or weakness) of the Phlegmatic personality. High levels of Stability 219 lead to strong loyalty towards the context-dependent authority.
  • the Propensity to Analyze 220 is determined by temperament and is not affected by other properties, except by external command. Even then, its effect is short term and rapidly trends back to the base tendency. When very high, there is a marked tendency to analyze and correlate facts before making decisions, and the confidence-based decision thresholds based on the outcome are normally raised.
  • the Propensity to Care-Take 221 is a temperament-dependent parameter, tending highest in the Phlegmatic and Sanguine. It increases the interest in acquiring people-related facts for short-term memory. The impact of this parameter is established, for example, by altering the parameters of the Clutter Filter for the context pool or short term memory.
  • the Propensity to Decide 222 is a parameter that is highest for the Choleric and Sanguine temperaments, and influences (increases) the willingness to make decisions with a minimum of facts. For the Choleric, decisions subsequently proven inferior may be altered, while for the Sanguine, the results tend to be ignored. Parameter 222 also increases the tendency to revise decisions as higher-quality facts are available, and decreases the stability in decisions and the tendency to foot-drag.
  • the Propensity to follow the Plan 223 is the (current) level of tendency to follow a plan. Its core value comes from personality traits, but is altered by such variables as stress, urgency, and external pressure. When pressure is high, as per Trauma parameter 230 , there is increased tendency to ignore the plan and to revert to personality profile-based responses. This is accomplished in a manner such as demonstrated, for example, in FIG. 5
  • the Propensity to Plan 224 is a measure of the tendency and desire to work out a plan prior to a project or task, and is a function of the temperament profile. If the Propensity to Plan 224 is high, work on the task will be suspended until a plan of steps in the task is worked out. The propensity to plan does not imply a propensity to follow the plan, per 223.
  • the Propensity to Procrastinate 225 is a measure of the tendency to procrastinate, deferring decisions and action. The primary value derives from the temperament per Table 2, and is then a fixed parameter but which may be gradually altered by experience or training. While procrastination is largely a characteristic of the Phlegmatic, it also occurs in the Melancholy decision-making process, in the absence of complete facts, and is normally very low for the Choleric.
  • the Propensity to Second-Guess 226 is a measure of the tendency to reevaluate decisions, even quality decisions, and possibly to evaluate them yet again. Temperament-dependent as shown in Table 2, it is highest in the Melancholy and typically lowest in the Choleric.
  • the Propensity to Stability of Action 227 is a measure of the tendency to maintain the status quo. Largely a Phlegmatic trait, it influences (increases) the tendency to foot-drag, and is implemented by a decreased willingness to alter plans. It may be connected to the underlying brain emulation or model as a part of the clutter or interest filter at the input of the context pool, short term memory or analyzer, suppressing new plans or suggestions that abort existing or active plans.
  • Propensity to Rest Hands on Hips 228 is a largely Melancholy trait whose more positive values increase the tendency of any attached mechanical skeleton to find a resting place for its hands, primarily on the hips or in the pockets.
  • This parameter provides a control value to the underlying brain emulation or model, which itself is responsible for the motor skill issues that carry out this tendency. That emulation or model actually determines whether or not this tendency is carried out.
  • parameters in Table 2 are directly controlled by one or more of the four underlying temperament selection parameters. They are scaled by percentages such as those also given by example in Table 2. They are then distributed by the brain model to the appropriate control points, filters and selectors within the underlying brain emulation or model.
  • These parameters may be derived from temperament, context, environmental and current-condition parameters, for example, although other means will become obvious during this discussion.
  • the parameters of Table 3 are exemplary. Most parameters in this table decay over time to the values shown at the right. These decay targets are nominal and may be altered through preemptive training. They derive from temperament percentages in a similar manner to Table 2. The list is by no means exhaustive or complete, and others will also become obvious during this discussion.
  • the current derived parameter values are distributed to the appropriate control points, filters and selectors within the brain emulation or model. In some cases, they control decision or stability thresholds, or establish the statistical settings, such as per 42 of FIG. 3 , for current-interest filters in the emulated brain, and to other such brain emulation functions.
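  • One plausible reading of the decay behavior described for these derived parameters is sketched below; the rate and baseline values are assumptions for illustration:

```python
# Derived state parameters decay back toward their temperament-defined targets.
def decay_toward_baseline(value, baseline, rate=0.1):
    """Move 'value' a fraction 'rate' of the way back to its decay target."""
    return value + (baseline - value) * rate


need_for_completeness = 95.0   # raised, e.g., by an analyzer decision or command
baseline = 60.0                # nominal decay target for this personality
for _ in range(10):            # each tick (e.g., a processing cycle) decays it
    need_for_completeness = decay_toward_baseline(need_for_completeness, baseline)
print(round(need_for_completeness, 1))   # drifting back toward 60.0
```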
  • the composite impact of these temperament and temperament-derived parameters determines the composite personality of the emulated brain.
  • the Base Decision Threshold parameter 250 is the starting basis for many decisions. It is the typical starting decision threshold, and is a measure of confidence or information completeness that must be obtained before a decision will be made.
  • the threshold is given as a percentage (0 . . . 100%), whose application depends upon the types of decisions being made. In some places it is used as an absolute threshold; in others it specifies a figure of confidence in the present facts, a figure that must be exceeded before a decision may be made.
  • the Concentration Ability parameter 251 is a measure of the ability to concentrate. A more positive value raises the threshold of attention to outside distractions, those unrelated to the issues in short term (or current context) memory in the underlying brain model or emulation. It is used by both the analyzer 30 and the clutter filter 40 .
  • Docility 252 is a measure of the overall propensity for stability during external emotional pressure. It contains a long-term filter that decays back to the base value. Positive Docility 252 greatly increases the threshold of attention to emotional trigger events. Docility 252 can be altered over moderate periods of time, but tends to return to its temperament-defined static value. When this value falls lower than its average setting, there is an increasing tendency to ignore learned responses and to revert to personality profile-based responses.
  • Hyperactivity 253 is a measure of current levels of hyperactivity, as would be normally defined by someone skilled in the art. It is established by a programmable value and subsequently augmented by temperament percentages. Hyperactivity is also influenced by Docility 252 and current emotional stress. These sources are the primary determiners for the base value of hyperactivity, but long-term training or experience can alter the value. Choleric and Sanguine temperaments have relatively higher values, while Melancholy and Phlegmatic values are quite low.
  • Hyperactivity 253 is implemented, for example, by introduction of (typically negative) random variations in the magnitude of selected decision thresholds. It also alters the time constants of task-step performance and present rhythm parameters, with additional ultimate impact upon the performance of motor tasks.
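  • The threshold variation described above can be sketched as follows; the scaling constant is an assumption of this example, not a value from the patent:

```python
# Hyperactivity 253 injects (typically negative) random variation into selected
# decision thresholds; higher hyperactivity allows larger downward excursions.
import random


def effective_threshold(base_threshold, hyperactivity):
    """base_threshold and hyperactivity are percentages (0..100)."""
    jitter = random.uniform(0.0, hyperactivity / 100.0 * 0.3)  # up to a 30% cut
    return max(0.0, base_threshold * (1.0 - jitter))


print(effective_threshold(base_threshold=70.0, hyperactivity=80.0))
```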
  • Filter Organizational Detail 255 specifies the filtering of organizational detail from incoming information, context pool or short-term memory for the brain emulation. A value below 100% removes the greatest percentage of detail.
  • Filter Human Interest 256 specifies the filtering of human-interest data from the incoming information, context pool or short-term memory in the emulated brain. 100% removes most human-interest information. The value will be highest for Choleric models and lowest for Sanguine temperaments.
  • Filter Relational Detail 258 specifies the filtering of detail about inter-relationships between facts from the incoming information, context pool or short-term memory. 100% removes most detail. The value is highest for Phlegmatic and Sanguine models and lowest for the Melancholy models. Higher levels inhibit the correlation of distant facts that are nonetheless related. Lower levels also encourage the analyzer 30 to spawn events to event memory 14 . This has the effect of iteratively revisiting the same information to analyze short-term memory for better correlation of data.
  • Filter Technical Detail 259 specifies the filtering of technical detail from the incoming information, context pool or short-term memory for the brain emulation. 100% removes most detail. The value is highest for Choleric and Sanguine models, and lowest for Melancholy models.
  • the Need for Completeness parameter 260 establishes the required level of completeness of information before making a decision. A higher value of completeness increases the likelihood of deferring a decision until all the facts are available, sometimes stymieing or stalling a decision. Other parameters related to importance and urgency can alter this parameter. The need for completeness can be altered by a decision of the analyzer 30 , and upon external command to the brain emulation, such as through 93 .
  • the need 260 drifts backwards to the value set by the propensity for completeness.
  • the need also reverts to the propensity value after a decision has been made. 100% implies the highest need for completeness. It is highest for Melancholy and lowest for Choleric and Sanguine models.
  • Patience With Detail 261 is the present level of patience. Its baseline value derives from the propensity for patience. It is affected by present conditions and can be commanded to rise. It largely alters decision thresholds, and values near 100% imply comfort with detail. The value is dynamic and tends highest for the Melancholy and lowest for Sanguine and Choleric.
  • Procrastination Level 262 is a measure of the present level of procrastination. Its base value is set by the propensity to procrastinate, is increased by uncertainty, and decreased by impatience. Procrastination defers decisions and postpones actions that are not otherwise inhibited by circumstances. Decision choices are implemented in a manner similar to 42 of FIG. 3 . Higher values of this level postpone decisions, even in the presence of hard facts (high sense of certainty).
  • One implementation of the underlying functional model of the brain is diagrammed in FIG. 4.
  • The three primary elements of the model are the analyzer/correlator 30, the context pool memory 10, and the English semantic analyzer 50.
  • English is always used where the processing of external communications is involved, whether in complete sentences or in sentence fragments. Internally, the system is essentially language independent, except where linguistics, phonics, the spelling of words or the shape of letters used in the language are involved. For ease of initial implementation, English was used, but essentially identical processes can be applied to any human language of choice. The choice of language in no way limits the disclosure for purposes of this patent. Indeed, the methods of this patent can be applied to autonomously translate one human language to another.
  • the Clutter Filter 40 plays a central role in determining what types of information are actually considered in the brain.
  • operation of the analyzer/correlator 30 is controlled or heavily influenced by personality state parameters 22. These same parameters may themselves be the results of analyzer 30, in many cases.
  • the flow of external information enters through the semantic analyzer 50 .
  • This distills content and intent from both English sentences and sentence fragments, and formats the distillate for inclusion into short-term memory 10 .
  • In the biological brain, there is an in-place ‘firing’ of a neuron when the associated fact is recognized.
  • If a specific neuron represents a common rabbit,
  • the firing of that biological neuron implies recognition of that rabbit.
  • The degree of firing (or output) represents the degree of certainty with which the rabbit is recognized.
  • the entire long-term memory 12 could be composed of read-only or slow flash memory, because recognition does not involve a change of the neuron's state in that memory.
  • recognition takes place by the existence, recognition or correlation of data within the context pool memory 10 .
  • Any reference to a ‘firing neuron’ is to be construed as placement of a reference to (the address of) that neuron into the context pool 10, along with a current firing level for it.
  • Neurons and Reference Indices: Every neuron records two types of information. The existence of a specific fact is implied by the fact that a neuron was defined to represent it at all. Experiences are implied by the relationships and linkages formed between neurons. Individual neurons are emulated by some fixed-size base information and a variable number of relational connection records, as shown in FIG. 9. Relational connections may be conditional, predicated upon the state of other neurons, and reference the ID indices of both their target neurons and condition triggers.
  • All neurons have a unique address, but it may be changed from time to time as memory is reorganized. Further, the very existence of some neurons is tentative. They may disappear unless reinforced over a period of time, and are located in the reinforcement memory 11 . Because their precise locations are unstable, references of one neuron by another could be problematic. Further, the relative size of a neuron can vary widely, depending upon the inter-relationships and context with other neurons.
  • a unique and unchanging index is allocated for each neuron created. References between neurons use this permanent index to inter-reference each other. If a neuron is deleted (in reinforcement memory 11 ), the index is reclaimed for later reuse. A specific bit within the index value indicates whether it refers to a normal permanent neuron or to the reinforcement memory 11 . A fixed subset of the indices to the reinforcement memory ‘tentative’ neurons are also reserved and used to indicate information block type and format within the context pool 10 .
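  • The index scheme can be pictured roughly as below; the specific bit position and field width are assumptions of this sketch (the text only states that a specific bit distinguishes the two memories):

```python
# Every neuron receives a permanent index; one bit marks 'tentative' neurons that
# live in reinforcement memory 11 rather than long-term memory 12.
REINFORCEMENT_BIT = 1 << 31      # assumed flag bit position


def make_index(serial, tentative):
    return serial | (REINFORCEMENT_BIT if tentative else 0)


def is_tentative(index):
    return bool(index & REINFORCEMENT_BIT)


rabbit = make_index(1042, tentative=True)   # newly learned, lives in memory 11
print(is_tentative(rabbit))                 # True until reinforced and promoted
```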
  • Neurons in the reinforcement memory 11 that have been reinforced over a period of time are made permanent by the analyzer/correlator 30 .
  • the analyzer then moves them to permanent memory 12 and alters all references to its index to show that it has been so moved. References within that neuron may themselves not survive the reinforcement process, and may be deleted during the transfer. Refer to Table 4 for data stored with the individual neuron.
  • the analyzer/correlator 30 repeatedly scans context pool memory 10 for both unprocessed information and for activities suspended while awaiting occurrence of certain events or conditions. It also updates brain parameters both to keep them current and to check for relevant changes of substance.
  • a ‘block’ may simply be a reference to a single neuron, and its firing level.
  • Basic information may include references to explicit spellings (e.g., a walk-back index to the text-tree for the word), pronunciation exceptions, visual-object descriptors and the like. Certain flags and start-indices for lexical matters and the like are also included here.
  • Relational Linkages: The weighted and conditional influence of this neuron upon another is defined by relational linkages, of which there may be up to 1,000 or more for some neurons. Each new experience and relationship learned has a relational linkage created for it. Initially, these relationships are created in the reinforcement memory, where they remain until later validated and moved to long-term memory (or are deleted). Relationals in reinforcement memory may refer to neurons in either memory, but those in long-term memory may refer only to other neurons in long-term memory. The Analyzer tracks the allocation, aging, validation, and ‘garbage-collection’ processes, and these are discussed in detail elsewhere.
  • Individual neurons are emulated by some fixed-size base information, and a variable number of relational connection records.
  • the latter may be conditional, predicated upon the state of other neurons, and reference the ID indices of both their target and conditional neurons.
  • Context Pool Memory 10: The core of all emulation occurs in the context pool (short-term) memory 10 and the analyzer/correlator 30. All information of immediate awareness to the emulator resides in that context pool 10. Neuron-like firing is implied by the very existence within the context pool 10 of a reference to a neuron from long-term memory 12. Information (blocks) enters the context pool 10 serially, as it were, but is processed in parallel by the analyzer 30.
  • Data may be placed into the context pool 10 from a number of sources, the initial one of which is often the semantic analyzer 50 . Except for inputs from the analyzer 30 , all context pool information is filtered by a clutter filter 40 , which largely keeps irrelevant or non-interesting data from reaching the context pool 10 .
  • Data in the context pool take the form of block-like structures of predefined format.
  • a block arriving from the semantic analyzer 50 contains the intent of a sentence, independent clause or sentence fragment.
  • a one-word reply to a question is fully meaningful as such a fragment.
  • Such a sentence block may contain references to a speaker, the person spoken to, and possibly, references to the person or object discussed. Many combinations of this and other sentence data are possible.
  • Blocks from analyzer 50 frequently include the purpose of the sentence, such as query (and expected type of answer), command, factual declarations, observations and the like. This type of data is discrete and readily identifiable by the semantic parse.
  • the length of the context pool is determined empirically by the application, but is nominally sufficient to handle a number of hours of intense study, or approximately a day of casual interaction. To put sizes into context, this represents roughly a megabyte of conventional digital storage, although selected size does not alter the means or methods of this patent.
  • the context pool 10 gradually drains, with neural firings gradually fading to zero. As neural references fade to zero, they are removed from the context pool, as suggested bio-mimetically.
  • New information may be introduced during sleep by the dreamer block 75 .
  • Dreamer-derived information created during deep sleep decays rapidly when awake, at rates different from normal context pool data decay. If the sleep time is insufficient, yet-active neural firings remain into the following wake cycle; these are handled as previously described.
  • a language semantic analyzer 50 accepts communications in the natural language of implementation, English, for example. It breaks down sentences, clauses, and phrases to derive intent and purpose from the sentence. It uses the context of the current conversation or interaction by polling the analyzer 30 , long-term memory 12 and reinforcement memory 11 . Access to present context is obtained indirectly from the context pool via analyzer 30 . Interpretation of language words is weighted by the presence of their associated neurons in the context pool, yielding context-accurate interpretations.
  • While the language semantic analyzer 50 could be hard-coded in logic, it is beneficial for many applications that it be implemented as an embedded processor. This method is not required for the purposes of this disclosure, but is a convenience for the parse and interpretation of languages other than the initial design language.
  • Analyzer 50 uses this overall grammar to interpret the intent of the communications.
  • Non-natural languages are often parsed by separate lexical and grammar parsers, using such commercial tools as Lex and Yacc. These were deemed burdensome and unwieldy for parses within the system of the present disclosure.
  • an alternative parser (Lingua, a commercial parser and not the subject of this disclosure) was created. Using Lingua, a highly complete description of English grammar was defined and serves as the basis for language semantic analyzer 50 . The intellectual property contained therein is a definition of English grammar itself, although it is also not the subject of this disclosure.
  • semantic analyzer 50 makes use of context-dependent information for a more accurate rendering of intent from the text.
  • Semantic analyzer 50 takes in natural language sentences, clauses, phrases and words, and emits blocks of decoded neuron references and inferred intent. In large measure, the non-changing and fundamental grammar words are discarded after they have served their purpose in the parsing. Similarly, structural constructs within sentences are often discarded after their implications have been gleaned. Finally, pronoun references such as he and it are replaced by references to neurons representing the resolution targets, such as “David Hempstead” or “rabbit”.
  • the semantic analyzer indirectly references both long term 12 and the “21-day” reinforcement memory 11 , and can extract relational information from either, to determine meaning and intent of specific words. It places greater weight on words whose neural references are already firing within the context pool 10 .
  • the definitions of English (or other natural language) grammar are contained in a definition file in a variant of the Backus-Naur Form (BNF). Refer to FIG. 5 for an example fragment of such a definition.
  • the example was implemented using the Lingua compiler, a commercial product of Neuric Technologies.
  • An example of diagnostic results obtained from parsing the sentence, “The table failed.” is given in FIG. 6 , showing the iterative nature of the parser used in the commercial Lingua product.
  • context pool 10 data may be blocked into inferred facts and data.
  • Preprocessing in semantic analyzer 50 will have already converted sentence fragments into complete sentences, or will have flagged the fragments for expansion by the Conjector.
  • Each sentence block is usually a complete sentence, with subject and predicate. Implied ‘you’ subjects have had the subject resolved and the appropriate neuron reference substituted. The implied ‘It is’ prefix, which turns a noun clause (e.g., an answer to a question) into a full sentence, would also have been added as needed. All sentence blocks are standardized in form, with inferred sentence information reordered into that form.
  • the blocks are of variable length, with flags that indicate the sentence data being stored. Some of this information is gleaned from state parameters.
  • the sentence type dictates which items are optional. Types include Declaration, Question, Exclamation, Observation, Accusation, Answer to Query, and yet others.
  • Other sentence data may include the following (and other) information:
  • Subject Gender (Male, Female, Object)
  • Target of Action including Person, Count, Gender
  • All noun-like items also contain the person, count, and gender flags. These sentence blocks are interpreted by the analyzer/correlator 30 and the conjector 70 as commands for interpretation. Some of these are described in the discussion about Table 7 contents.
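  • A hypothetical layout for such a sentence block is sketched below; the field names are stand-ins chosen for this example, and only the general shape (a type, flagged optional items, and person/count/gender on noun-like items) follows the text:

```python
# Variable-length sentence block whose flags indicate which optional items are stored.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class NounRef:
    neuron_index: int
    person: int = 3          # first, second or third person
    count: int = 1           # singular or plural
    gender: str = "object"   # male / female / object


@dataclass
class SentenceBlock:
    sentence_type: str                       # Declaration, Question, Answer to Query, ...
    subject: Optional[NounRef] = None
    verb_neuron: Optional[int] = None
    target_of_action: Optional[NounRef] = None
    flags: set = field(default_factory=set)  # which optional items are present


block = SentenceBlock(
    sentence_type="Declaration",
    subject=NounRef(neuron_index=2001, gender="male"),
    verb_neuron=2002,
    target_of_action=NounRef(neuron_index=2003),
    flags={"subject", "verb", "target"},
)
print(block.sentence_type, block.flags)
```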
  • The Sentence Recognition Process: Regardless of whether the sentence was obtained through written text or from auditory speech, recognition and understanding of sentence content is roughly the same. The greatest differences are the additional cross-checks, validations, and filters imposed on spoken speech.
  • a general communications triad is defined: The speaker, the person/object spoken to (e.g., the receiver of commands), and the person, object or subject spoken of. Most of this information can be inferred from sentence content, from the present context pool 10 , and from state parameters 20 and 23 .
  • Parse: Parse the sentence using language grammar rules, such as in FIG. 5.
  • Extract Structural Elements: Extract key structural elements, discarding semantic information. Store the data in appropriate blocks or neuron references for use by the correlators 30 and 75.
  • Compile Definitives: Compile any definitive sentences into relational and qualifier constituents, storing the relational associations (if any) with the relevant fact neurons. This is done indirectly by submitting an appropriate directive to the context pool 10.
  • the above basic process is exemplary of a portion of the typical activity for parsing a sentence and generating information or command blocks for inclusion in the context pool 10 .
  • Clutter Filter 40 acts to limit entry of certain types of information into context pool 10 .
  • Information entering the context pool 10 must pass through the clutter filter 40 , except for that emitted by analyzer 30 .
  • the purpose of the filter 40 is to remove extraneous neurons, such as language or grammatical tokens and non-significant gesture information.
  • the clutter filter 40 follows preset heuristics which may either be fixed or adaptable.
  • the result of the filter is to maximize the consideration of relevant information and to minimize ‘mental clutter’ and things of little interest to the personality being modeled.
  • Choleric temperaments for example, do not thrive on human-interest information as the Sanguine does. Data so identified may be removed in keeping with current parameter conditions. This may occur during the course of conversational exchange, during which time semantic analyzer 50 or other sources flags the data on the basis of the topic of discussion.
  • the clutter filter is a substantial contributor to the emulation differences in right-brained and left-brained activity, second in this only to the work of analyzer/correlator 30 .
  • During interaction with the outside world, a large number of neurons are referenced from memory and submitted to the context pool 10 for analysis, correlation, conjecture and dreaming.
  • the filter considers the type and groupings of neurons being submitted, as well as some of the inhibitor factors, and may opt to discard them instead of forwarding them to the context pool 10.
  • outputs from the dreamer 75 are given very low priority, unless overall levels of neural firings in the context pool 10 are very low.
  • analyzer/correlator 30 governs overall thought (and memory) processes and normally does not produce clutter.
  • the filter also prioritizes incoming information. The entire contents of answers to questions are likely to be passed through, whereas the same material might not ordinarily be.
  • the primary basis for determining what constitutes ‘clutter’ is the personality parameters 20, a subset of the state parameters 22. (In FIG. 4, they are shown separately from other parameters for emphasis and clarity, but are essentially the same.) Logic such as that of FIG. 3 demonstrates one means by which the clutter determination may be made. It will be obvious to one skilled in the art that the clutter filter 40 as described here can be augmented with additional rules and heuristics without altering the basic disclosures of this patent.
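  • A minimal sketch of such a parameter-driven clutter test, in the spirit of the weighted random influence of FIG. 3, follows; the parameter names reuse the filter parameters discussed earlier, while the thresholding scheme is an assumption of this example:

```python
# Personality filter parameters (0..100) gate incoming blocks; higher filter values
# make discarding that category of information more likely (weighted random choice).
import random


def passes_clutter_filter(block_kind, personality):
    filter_level = {
        "human_interest": personality.get("Filter Human Interest", 50.0),
        "technical_detail": personality.get("Filter Technical Detail", 50.0),
        "organizational_detail": personality.get("Filter Organizational Detail", 50.0),
    }.get(block_kind, 0.0)
    return random.uniform(0.0, 100.0) >= filter_level


choleric_like = {"Filter Human Interest": 90.0, "Filter Technical Detail": 10.0}
print(passes_clutter_filter("human_interest", choleric_like))   # usually False
```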
  • Analyzer/Correlator 30: The analyzer/correlator 30 is the heart of the emulated brain and is the primary center of activity for thought processes. It is also the primary means for updating all dynamic brain parameters and is the only means for initiating permanent storage of information.
  • analyzer 30 uses information from conjector 70 , although results using the latter are also of lower quality.
  • Entry of a neuron reference into the context pool memory 10 initiates a sequence of events unique to the neuron and its associated relational (experiential) linkages, or ‘relationals’. Detailed later, these often make use of the event queue memory 14 to handle the implications of their connections.
  • analyzer/correlator 30 can be implemented as hard-coded logic, a form of command interpreter, or as an embedded processor without altering the means of this disclosure.
  • … (‘keep’) count is maintained with the neuron in memory 11.
  • Decay 21-day References: Periodically (e.g., during sleep intervals) decay the ‘keep’ count for all neurons in the reinforcement memory 11, to enforce the need for reinforcement of learned information.
  • Create a Permanent Neuron: Neurons in reinforcement memory 11 that have satisfied their reference count level are made permanent by moving them to long-term memory 12, updating their references, and removing them from short-term memory.
  • Initiate an Event: Certain conditions, particularly due to neuron relationals, and some types of sentences, cause events to be queued to the event memory 14. The queuing is normally for execution after a specified delay, awaiting the meeting of the conditions pending.
  • Conjecture Clutter Removal: Commands or references created by the conjector 70 are correlated for relevance, and discarded for low relevance to the target subject(s).
  • Dream Clutter Removal: While awake, information and command fragments from dreamer 75 are rapidly decayed. During sleep periods, the perceived accuracy of these items is increased and they are treated as ordinary and factual information, but motor-skill related commands are suppressed.
  • analyzer/correlator 30 maintains and updates numerous lists, such as present subjects of conversation or inquiry, the status of pending answers to questions issued, maintenance and completion status of motor skill activity, and the like. Its primary source of information and commands comes from the present contents of the context pool 10 .
  • Context Pool Commands: Within the context pool 10, information and facts are stored in generic form as neuron references (neural indices). Both state parameters 22 and context pool commands are encoded as dedicated lower values of neural indices. The commands are variable in length, with their index followed by a length and supporting information.
  • Sentences may be distilled into multiple commands, each complete with neural references. Implied subjects, verbs or objects are resolved with references to relevant neurons. For sentences with multiple subjects, verbs or objects, the sentence content is replicated, with one copy per item in the subject list, for example.
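  • The encoding described above might look roughly like the following; the layout (command index, then length, then neuron references) follows the text, while the concrete numbers are placeholders:

```python
# Context pool commands: a low-valued index, a length, then supporting neuron
# references. Sentences with multiple subjects are replicated, one copy per subject.
DECLARATIVE = 231
IMPERATIVE = 232


def encode_command(command_index, neuron_refs):
    return [command_index, len(neuron_refs)] + list(neuron_refs)


def replicate_per_subject(command_index, subjects, rest):
    """One copy of the command per item in the subject list."""
    return [encode_command(command_index, [s] + list(rest)) for s in subjects]


# "Cats and dogs are animals": one declarative per subject neuron (101, 102)
print(replicate_per_subject(DECLARATIVE, subjects=[101, 102], rest=[200]))
```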
  • Execution commands are always flagged by their source, such as a speech or grammar analyzer, the Analyzer or Correlator 30 , the Conjector 70 , Dreamer 75 and so on.
  • the Analyzer 30 later considers the source when applying the command during its thought or decision processes. Exemplary commands from semantic analyzer 50 are given below, these particular ones being based upon sentence types.
  • Declarative 231 is an instruction to consider a present condition about the subject. It may also be a part of an experience process, ultimately culminating in the creation of a neuron-to-neuron or neuron-to-state-parameter relationships. This command is usually created by the parsing of a sentence, but can also be created by thought processes within analyzer 30 .
  • Declaratives may result in a remembered relationship, in time and with reaffirmation, and through conjector 70 's action. That is, declaratives are ‘taken with a grain of salt’, and consider confidence in the source of the observation. They differ from the definitive 233 in that the latter is already presumed to be a source of facts, and only the reliability of (confidence in) the information needs to be confirmed before remembering it.
  • Imperative 232 instructs the analyzer 30 of the brain emulation to do something, such as to consider a proposal, pay attention, recall something, or to conjecture an answer to an issue with insufficient information. It is a command for action of some type, directed towards the brain emulation.
  • a command such as ‘Come here!’ must be evaluated in the present context. It implies activation of a motor-skill list to begin physical motion, and targets the location of the speaker. The latter may not be in the context pool 10 , but is maintained in a state parameter 22 .
  • analyzer 30 directs the motor skill via task list 13 . It can then, for example, issue an await-on-completion event 142 and dismiss the command from memory. It will later receive a completion message (or a notation that it encountered a brick wall or other impediment to carrying out the instruction), closing the command.
  • Definitive 233 indicates definition of a fact (in reinforcement memory 11 ), and may include auxiliary conditional relational information.
  • The Definitive 233 portion, "A cat is an animal with four paws," defines these neurons if they are not already known: Cat, Animal and Paws. Even if the meanings of Animal or Paws are unknown, they can still be remembered, and the suitable relationals later formed between them. These are all recorded in reinforcement memory 11, if not already there and not known in long-term memory 12.
  • If already in reinforcement memory 11, their existence is reaffirmed to encourage possible permanent recollection. If the veracity of the speaker is high, less time is required to reinforce the facts. If the system is in preemptive training mode, these are assumed to be pristine facts, perhaps from God, and are immediately and permanently remembered.
  • the declarative 231 portion “The front (cat) paws are commonly called forepaws,” also forms a definition, but must be reaffirmed to a greater degree than for the definitive clause. (Because parsing has already been performed, the explicit subject defined at the start of the sentence has already been associated with the trailing clause, too, by semantic analyzer 50 .)
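  • To make the example concrete, a sketch of the resulting entries (under assumed data structures; the actual record formats are given in Table 4 and FIG. 10) might look like this:

```python
# "A cat is an animal with four paws": define the neurons if unknown and record
# relationals in reinforcement memory; reaffirmation bumps a 'keep' count.
reinforcement_memory = {}   # name -> {"relationals": [...], "keep_count": n}


def define(name):
    entry = reinforcement_memory.setdefault(name, {"relationals": [], "keep_count": 0})
    entry["keep_count"] += 1          # reaffirmation reinforces the neuron


def relate(source, relation, target, weight=1.0):
    define(source)
    define(target)
    reinforcement_memory[source]["relationals"].append((relation, target, weight))


relate("Cat", "is_a", "Animal")       # defines Cat and Animal if not already known
relate("Cat", "has", "Paws")          # the count ("four") could ride as a qualifier
print(reinforcement_memory["Cat"])
```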
  • Interrogative 234 poses questions and requests. These are normally injected into context pool 10 by the grammar semantic parser 50 , but may also be queries from other sources. Many (but not all) questions are simply a declarative statement with a question indicated, and are often formed by a restructuring of a simple declarative sentence.
  • the parser 50 sorts questions into those seeking affirmation (yes/no) and those seeking specific information, and presents them to the context memory 10 as declaratives 231 marked for validation or as imperatives 232 demanding an informative response. In either case, analyzer 30 only sees data constructs for the latter forms, marked as questions so that it can form its response to the question.
  • analyzer 30 loosely takes on the form of a von Neumann processor, with the ‘program’ being the command stream from the English parser, or from other blocks.
  • In communicating with brain emulators that share common memory 12, their analyzers 30 can forward ‘digested’ command blocks directly to the context pool of this emulator. If communicating with the outside world via external interface 98, analyzer 30 reformats the command block into an English sentence for parsing there, and receives English back via interface 93.
  • Neurons and the Context Pool 10.
  • Conditionals expect a specific neuron (or combination of neurons) to be fired.
  • State parameters 20 and 22 are pseudo-neurons, and preexist all allocated neurons. They are treated as neurons, and are assigned the lowest index ID numbers, but have no relational (experiential) links created for them.
  • the ID of every firing neuron (except for state parameters 20 and 22 ), along with some information specific to the neuron, is maintained in the context pool 10 , including the degree of firing.
  • Aged neurons in context pool 10 that are no longer firing are eliminated from the pool memory, usually while ‘sleeping’. Neurons that are firing but are not being reaffirmed or re-fired in the context pool 10 have no effect, other than to establish the context of the moment. For example, they may be the subject of a conditional test, or may alter the contextual meaning of a sentence being parsed.
  • Bidirectional Relationals: Where relationships are bidirectional, neurons or state parameters at both ends of the relational will issue events. If any conditions specified are not met, no event is fired off. For bidirectional relationships, A implies B, and B implies A. In either case, the relationships may be conditional, predicated on other neurons also firing. Referring to FIG. 10, a relational link 1253 is created within both neurons in the relationship, each referring to the other.
  • Analyzer/correlator 30 maintains other lists of information in short-term memory similar to that of the state parameters 22 , which are also treated as blocks of predefined neurons. These have been discussed elsewhere within this patent and include lists such as the following:
  • When a new command is added to the context pool 10 , it usually contains a reference to a neuron that represents a fact or condition of existence. Usually it will reference more than one. Each such reference either brings the neuron ‘into the pool’, or reaffirms neurons already in the context pool 10 .
  • Recognition of a person's face brings the ID of that person into the context pool 10 , firing the relevant neuron in accordance with the degree of confidence in the recognition. (e.g., “That might be Jackie, over there.”) Shortly thereafter, hearing the same person's voice increases the confidence of the identification. The firing of that person's neuron (ID) may therefore increase from perhaps 65% to 95%. Ongoing interaction with that person keeps his ID alive in the context pool 10 .
  • Firing level is controlled by the values of the referencing neurons (e.g., Green, Animal or Swamp), and the weight given in the relational connections. That is, the Alligator neuron will fire weakly if Florida (which might imply Swamp) is firing weakly, although nothing else directly activated Swamp.
  • Analyzer 30 effectively acts as a correlator by walking through the connections of all firing neurons, awakening other neurons as long as firings are not suppressed by conditional relationships.
  • the analyzer 30 causes any neuron not reaffirmed or re-fired over time to gradually decrease its firing level. That neuron is then ejected from the context pool 10 if it goes to zero. It is also dumped from memory if it is still firing but has been there a long time and the context pool 10 is full.
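  • A minimal C++ sketch of such aging and ejection, assuming a simple linear decay, a fixed pool capacity and a hypothetical PoolEntry layout (none of which are specified by the text):

      #include <algorithm>
      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // Hypothetical context-pool entry: a firing neuron ID, its firing level and its age.
      struct PoolEntry {
          uint32_t neuronId;
          float    firingLevel;   // 0.0 .. 1.0
          uint32_t ageTicks;      // time since last reaffirmation
      };

      // One maintenance pass: decay every entry, eject entries that reach zero, and
      // dump the longest-resident entries when the pool is over capacity.
      void agePool(std::vector<PoolEntry>& pool, float decayPerTick, std::size_t capacity) {
          for (auto& e : pool) {
              e.firingLevel = std::max(0.0f, e.firingLevel - decayPerTick);
              ++e.ageTicks;
          }
          pool.erase(std::remove_if(pool.begin(), pool.end(),
                                    [](const PoolEntry& e) { return e.firingLevel <= 0.0f; }),
                     pool.end());
          if (pool.size() > capacity) {
              std::sort(pool.begin(), pool.end(),
                        [](const PoolEntry& a, const PoolEntry& b) { return a.ageTicks < b.ageTicks; });
              pool.resize(capacity);
          }
      }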
  • Reinforcement memory is a way-point in the process of learning and remembering things. All new information and relationships are established in reinforcement memory 11 , and it serves as a filter for items important enough for later recall. Analyzer 30 handles this process.
  • the reinforcement memory 11 is a means of eliminating non-essential facts, relationships and incidents otherwise uselessly cluttering permanent memory.
  • the ultimate growth of long-term memory 12 is then moderated, keeping the mental processes and memory more efficient.
  • the long-term memory 12 and the reinforcement memory 11 share a more or less common format. Allocation of neurons and relationals is handled entirely by analyzer 30 , and the policies that govern permanent retention reside there.
  • “Other” tables include specialty tables associated with a single neuron and used for recall of motor-skill task lists, aural or visual artifacts or objects, and the like. Their format is specific to the emulator type (e.g., visual, speech or motor-skill) that produces them, but they follow the standard processing and correlation rules for ordinary neurons.
  • a Laptop neuron is meaningless of itself (except for spelling, pronunciation and visual shape), but has importance because of its relationships to Computer, Portable, and Convenient.
  • Predicate: Transitive_Verb Direct_Object, or Intransitive_Verb Indirect_Object, or Intransitive_Verb.
  • this system permits the creator of a parser to “drill down” to any desired level of detail, and can be extended as desired to accommodate all parts of speech. It is generally accepted that such formalized description of a grammar is then automatically translated into some computer language. Example commercial tools that do this include Lingua, Yacc and Lex. Such tools then create computer code (for example, in C++) for a program that parses a sentence.
  • one neuron is preferably allocated to each word in English, at least for words presently recognized by the underlying parser. Such a neuron is merely a place-holder and has no intrinsic value of itself, but its importance is its connection to other such neurons. The system of interconnected relationships between the neurons is a fundamental part of what constitutes memory for the artificial human.
  • the word muddy can be construed as any of, “to contain mud”, “to be mud-covered”, “mud-like”, “unclear” or other concepts.
  • A conditional link could be created between the mud and boot neurons. In that manner, simple reference to the term boot can evoke the idea of mud.
  • Conditional relationships can also be created between boot and rain, boot and snow, and so on.
  • Idioms can be handled in a similar manner.
  • a single neuron can be assigned to represent a multi-word idiom, and associations can be then drawn between that “idiom” neuron and other neurons whose meaning (and relationships) are already established.
  • off the wall can be stored as a single neuron (with the text, “off the wall” attached to it), and that neuron can then be conditionally linked to strange, unusual and offbeat neurons.
  • association can be unidirectional or bi-directional, and may be conditional or unconditional.
  • mud can be associated with boot such that reference to boot implies mud, but such that mud need not imply boot.
  • multiple associations may be drawn between a neuron and one or many other neurons. Each specific association is uni- or bi-directional and is predicated on the firing of some other neuron, or not.
  • idioms can be parsed as a sequence of individual constituents. Ascertaining their meaning from their individual constituents is extremely difficult, however, if not impossible. True idioms show three qualities, all causing difficulty with parsing:
  • a scanner device is required that can search for either an explicit sequence of words (e.g., “off the wall”), or—failing that—for a single word. If such a sequence is indeed found, such as might be associated with an ‘idiom’ neuron, that neuron is noted and may subsequently be fired (turned on). If only a single word of the sequence is matched against a neuron (or an item's text sequence), e.g., “off”, then the neuron for the word off is identified instead.
  • the example word off can be readily classified as a single word, or as a multi-word idiom, off the wall.
  • Either such neuron may/will contain associations with other neurons, and those associations give the final meaning to the word or idiom.
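  • A minimal C++ sketch of such a scanner, trying the longest word sequence first and falling back to the single word; the lexicon layout, maximum idiom length and zero sentinel are assumptions:

      #include <algorithm>
      #include <cstddef>
      #include <cstdint>
      #include <string>
      #include <unordered_map>
      #include <vector>

      // Hypothetical lookup tables: idiom text -> neuron ID, single word -> neuron ID.
      using Lexicon = std::unordered_map<std::string, uint32_t>;

      // Starting at token i, try the longest sequence first (e.g. "off the wall");
      // fall back to the single word (e.g. "off") if no idiom matches.
      uint32_t matchIdiomOrWord(const std::vector<std::string>& tokens, std::size_t i,
                                const Lexicon& idioms, const Lexicon& words,
                                std::size_t maxIdiomLen = 5) {
          for (std::size_t len = std::min(maxIdiomLen, tokens.size() - i); len >= 2; --len) {
              std::string phrase = tokens[i];
              for (std::size_t k = 1; k < len; ++k) phrase += " " + tokens[i + k];
              auto it = idioms.find(phrase);
              if (it != idioms.end()) return it->second;   // 'idiom' neuron found
          }
          auto it = words.find(tokens[i]);
          return it != words.end() ? it->second : 0;       // 0 = unknown (hypothetical sentinel)
      }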
  • the neuron for under the weather is likely to be connected to (and to fire) an ‘ill health’ or ‘not-feeling-good’ neuron, or to reduce the firing of a ‘sense-of-health’ neuron.
  • Which other neurons are to be fired by the recognition of an idiom is determined both by the context within the sentence and by the present connections (knowledge) of the neural network in which it is implemented.
  • Every neuron is assigned a serial number 127 , something of no significance in itself.
  • Each relational connection to another neuron uses that unchanging serial number as an ID. From the ID, spelling, pronunciation and other relevant information is obtained.
  • When memory is implemented as digital memory, the ID table 126 is located preferably at the base of that memory and consumes a predetermined and finite logical space. It is sized to have one element for every possible neuron. In reality, memory can be resized as more is made physically available, with suitable offsets applied to the resolution value for each ID in the table 126 . For each index 127 , the corresponding offset into the ID table 126 contains a neuron's address in the neuron table 125 .
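  • A minimal C++ sketch of this two-level lookup (index into ID table 126 yielding an address in neuron table 125), assuming a flat byte array and hypothetical field names:

      #include <cstdint>
      #include <vector>

      // ID table (126) at the base of memory, one slot per possible neuron; the neuron
      // table (125) holds packed, variable-length records.
      struct BrainMemory {
          std::vector<uint32_t> idTable;        // index 127 -> offset of the neuron record
          std::vector<uint8_t>  neuronTable;    // packed neuron records
          uint32_t              baseOffset = 0; // adjusted when memory is resized or moved

          // Resolve a neuron's serial number to the address of its record.
          uint8_t* resolve(uint32_t id) {
              return &neuronTable[idTable[id] + baseOffset];
          }
      };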
  • a vocabulary of 30,000 words is an acceptable working size when words alone are considered. For some people, up to 300,000 unique words are known.
  • Each concept, e.g., “off the wall” to be remembered has its own index, as do words, remembered events or conditions; each corresponds to a unique neuron record 1250 in the neuron table 125 .
  • neurons 1250 are emulated by a fixed-size information block 1251 and a variable number of relational connection records 1252 .
  • the latter may be conditional, predicated upon the state of other neurons. They may reference the ID indices 127 of both their target and conditional neurons.
  • these technology-dependent linkage-pointer structures may be superfluous and may be eliminated or replaced.
  • Basic information 1251 may include references to explicit spellings (e.g., a walk-back index to the text-tree for the word), pronunciation exceptions, visual-object descriptors and the like. Certain flags and start-indices for lexical matters and the like are also included here.
  • the relational 1252 is a link between two neurons. It may also be a link between a neuron and a state parameter. Relationals may be unidirectional or bidirectional in nature, and may be performed only if a specified set of conditions are met. Relationals are loosely suggested by the biological neural dendron.
  • When implemented in digital memory, it is convenient that relationals 1252 are allocated in the space immediately behind the fixed-length portion of a neuron record 1251 . Normally a blank space is reserved there in anticipation of relational record insertions. Before inserting a new relational 1252 , analyzer 30 checks for sufficient room and, if there is not enough, reallocates the entire neuron with greater space.
  • the length of the relational detail block 1252 is variable, depending upon the type and number of relational connections made to other neurons. It is not unreasonable that total (digital) memory may consume sixteen (16) megabytes to two (2) or three (3) gigabytes.
  • Relationals 1252 have an AND-OR organization. AND-connected relational records are grouped together following the fixed-length portion of the neuron.
  • a specific target ID 1256 is generically defined to represent the OR condition, with the remainder of that ‘relational’ record ignored. As stated elsewhere in this discussion, certain neuron IDs are reserved for such special purposes as this. Similarly, certain values of the weight 1257 are reserved to indicate an INHIBIT condition, and the weights themselves may be negative, to reduce the level of recognition, the firing level.
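  • A minimal C++ sketch of how such AND-OR grouped relationals might be evaluated; the record layout, the reserved OR-marker value and the max-over-groups combination rule are all assumptions, not taken from the text:

      #include <cstdint>
      #include <functional>
      #include <vector>

      // Hypothetical relational record: target ID, weight (may be negative) and an
      // optional condition neuron that must also be firing.
      struct Relational {
          uint32_t targetId;     // a reserved value (OR_MARKER) separates AND groups
          float    weight;
          uint32_t conditionId;  // 0 = unconditional
      };

      constexpr uint32_t OR_MARKER = 0xFFFFFFFF;   // hypothetical reserved neuron ID

      // AND-connected records are grouped; groups are OR-ed by keeping the best group.
      // 'firingOf' returns a neuron's current firing level (0.0 .. 1.0).
      float evaluateRelationals(const std::vector<Relational>& rels,
                                const std::function<float(uint32_t)>& firingOf) {
          float best = 0.0f, groupSum = 0.0f;
          bool  groupOk = true;
          auto closeGroup = [&]() {
              if (groupOk && groupSum > best) best = groupSum;
              groupSum = 0.0f;
              groupOk = true;
          };
          for (const auto& r : rels) {
              if (r.targetId == OR_MARKER) { closeGroup(); continue; }
              if (r.conditionId != 0 && firingOf(r.conditionId) <= 0.0f) groupOk = false;
              groupSum += r.weight * firingOf(r.targetId);
          }
          closeGroup();
          return best;
      }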
  • the relational 1253 is unidirectional.
  • the neuron 1250 it is a part of is fired to the degree that the neuron referenced by target ID 1256 fires. However, the firing of this neuron 1250 does not otherwise affect the target ID 1256 .
  • Grass may imply Green, but Green does not imply Grass.
  • analyzer 30 creates a suitable relational for each of the two neurons, each pointing back to the other. This is akin in software to a doubly-linked list.
  • The weighted and conditional influence of this neuron upon another is defined by relational linkages 1252 , of which there may be up to one-thousand (1000) or more for some neurons. Each new experience and relationship that is learned has a new relational linkage created for it. The garbage collection and management of neuron-relational memory spaces is discussed elsewhere in this patent.
  • Relationals 1252 in reinforcement memory 11 may refer to neurons in either memory, but those in long-term memory 12 may refer only to other neurons in long-term memory 12 .
  • Analyzer 30 tracks allocation, aging, validation, and ‘garbage-collection’ processes, as discussed in detail elsewhere.
  • both reinforcement 11 and long-term memories 12 may hold other encapsulated information. These data blocks are treated and referenced as ordinary neurons, but contain extended structures for efficient later recall of compound and complex entities. Details of each of these are discussed with the description of their relevant neurons.
  • the neuron process for recognition of sight and sound is by reconstructive correlation, matching a reference image or sound against a known object or sound.
  • Memory storage is ‘reconstructive’ in that actual sampled sounds or pixilated images are not stored. Rather, sufficient information to reconstruct a reference object (for comparison purposes) is remembered.
  • Stored images and sounds then consist of lists of object artifacts rather than detailed information on them. The degree of match or similarity determines the neuron's firing level.
  • Task Lists are lists of actions to be taken, to carry out repetitive or learned Motor Skills tasks. They are specific to supporting emulators, such as those that handle motor skills or musical abilities. Task lists are usually coupled tightly to sensory processes, and can be started, interrupted or stopped by the main brain model.
  • Aural Artifacts These are descriptors of basic sounds, including such things as phonemes, ADSR rules and the like. They are not complete words or sounds.
  • Visual Artifacts This is an arbitrary set of visual elements used to recognize more complex objects.
  • the artifacts may include lines at various angles, facial and nose shapes, alphabetic outlines, and the like. They are elements used for the reconstruction of visual images, of the minimum detail needed to perform image correlation matching.
  • Visual Objects These are descriptions of complete visual objects, but of minimal detail needed to recognize them. For example, to recognize a specific face, only a portion of the eyes, nose and chin or cheekbone may be required.
  • This reconstruction object is connected to the neuron for a specific person, for example, attaching the face to its identity. The connection is done via bidirectional conditional link.
  • a correlation template is recreated from the stored table information and applied to the appropriate correlator. This may be, for example, a vector skeleton for use by the visual correlator for image identification. The neuron fires in proportion to the degree of match.
  • Event Queue and Memory 14 . Events are special-purpose commands issued to a queue 14 . They are slated for later execution at a specific time, after a specified delay or after a specified set of conditions are met. They are the means by which unwanted looping over information in the context pool memory 10 is circumvented.
  • An event is simply a marker or flag set down to remind the system to do something when a specified condition is met. It greatly simplifies the handling of actions that are asynchronous with each other.
  • the analyzer 30 may issue one or more events to the event pool 14 .
  • the analyzer may create an event that adds a new reference back into the context pool 10 . It could also issue a conditional event to later force the analyzer 30 itself to iteratively rescan the context pool 10 , such as may be done for an analytical temperament such as the Melancholy.
  • the same mechanism is also used for establishing conditional relationships between neurons, or between neurons 1250 and state parameters 22 . Events can be generated by the alteration of state parameters 22 . By issuing events for future execution, the analyzer 30 avoids getting side-tracked from the task at hand being worked.
  • the event queue 14 consists of an interpreter 140 and an event list 141 . Creation of an event causes an event 142 to be inserted in the event list. Events 142 in the list 141 consist of a command field and other optional fields shown in FIG. 12 .
  • the interpreter 140 repeatedly scans the event list 141 for events 142 that can be processed. Whether or not they can be processed is determined by the conditions and timing fields.
  • the auxiliary data field if present, contains information unique to the event type.
  • After interpreter 140 has scanned to the end of event list 141 , it restarts scanning at the beginning. If no events 142 are left to process, it awaits the creation of a new event 142 .
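  • A minimal C++ sketch of interpreter 140 scanning event list 141 and executing only those events 142 whose timing and condition fields allow it; the field names and readiness test are assumptions:

      #include <cstdint>
      #include <functional>
      #include <list>

      // Hypothetical event record 142: a command code, a timing field and a condition test.
      struct Event {
          uint32_t commandCode;
          uint64_t notBeforeTick;                // earliest time at which it may run
          std::function<bool()> conditionsMet;   // e.g. required neuron firings
      };

      // One pass of the interpreter over the event list: process and remove every ready
      // event; the rest remain queued for a later pass.
      void scanEventList(std::list<Event>& eventList, uint64_t nowTick,
                         const std::function<void(const Event&)>& execute) {
          for (auto it = eventList.begin(); it != eventList.end(); ) {
              if (nowTick >= it->notBeforeTick && it->conditionsMet()) {
                  execute(*it);
                  it = eventList.erase(it);
              } else {
                  ++it;
              }
          }
      }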
  • the event queue 14 can be implemented as hard-coded logic, as a micro-coded processor, a software emulation, an embedded processor, FPGA, ASIC, optical or other technology of choice, without altering the means of this disclosure.
  • Conjector 70 proposes decisions based upon incomplete or partial facts, or facts of low confidence. While the analyzer 30 is the main thinking facility for the emulator, it takes advice and proposals from both the conjector 70 and dreamer 75 blocks. Proposals from the conjector 70 are filtered by clutter filter 40 on the basis of temperament and personality.
  • analyzer/correlator 30 acts on the sentence block to determine a suitable course of action where appropriate. If it ‘comes up dry’, the analyzer 30 invokes the conjector 70 to suggest a valid meaning. If the resulting quality of the conjector 70 output is too low, analyzer 30 may direct the communications interface 98 to ask for clarification. It sets appropriate parameter flags to await an answer to the question of clarification.
  • Conjector 70 output is similar to any normal neuron reference or sensory nerve that is firing at a relatively low level for the topic. Other than being flagged as coming from the conjector 70 , output of conjector 70 is essentially identical to data inferred from sentences by semantic analyzer 50 .
  • the conjector 70 behaves in a similar manner to the analyzer 30 , except that it only looks at material in the present context pool 10 . It is not bound by the same needs for hard facts as the analyzer 30 is, and effectively offers subjective information for consideration. Its proposals are largely ignored by the analyzer, except for cases such as the following:
  • the analyzer turns to the conjector 70 to fill in the blanks.
  • conjector 70 reviews outstanding questions or issues, as defined in the context pool, supporting tables and appropriate state parameters 22 .
  • Some state parameters 22 track the present topical subject(s), questions being asked, and information presently being sought by analyzer 30 . On the basis of this material, the conjector scans even low-firing neuron references and commands within the context pool 10 and proposes (conjectures) answers for the analyzer 30 .
  • Respect by analyzer 30 for conjecture is implied by the weighting placed on it. Proposals are ignored if they conflict with other information, or if better (stronger firing) information becomes available. Conjectures age rapidly and are soon forgotten from the context pool 10 , whether or not acted upon.
  • the analyzer 30 considers the source of the conjector 70 's ‘information’ and its levels of confidence (firing levels). It then establishes its own need for the proposal, and its own level of confidence in the data. Rejected conjecture is immediately deleted.
  • conjector 70 can be implemented as hard-coded logic, as a micro-coded processor, a software emulation, an embedded processor, FPGA, ASIC, optical or other technology of choice without altering the means of this disclosure.
  • Dreamer 75 functions as the ‘right side’ in the brain emulation of this disclosure. It peruses neuron references in context pool 10 and uses different weightings for state parameters 22 than used by analyzer 30 for its inputs and decision processes.
  • the dreamer 75 influences the analyzer 30 primarily by injecting fired neuron references into the context pool 10 , rather than just structured commands such as from the semantic analyzer 50 . Where pre-existing information in the context pool 10 comes from visual or aural sources 60 , or from visual neuron correlations, the dreamer 75 may output proposals in the form of command blocks.
  • the dreamer 75 Similarly to correlator-analyzer 30 's processing methods, the dreamer 75 generates new references and commands based upon existing neuron firings. However, when traversing the neuron relational chains, lower regard is given to relational conditions 1252 , as in FIG. 9 . The resulting outputs are of low reliability, as indicated by both their source and its firing levels.
  • the dreamer 75 may indirectly alter the subject topics by issuing events to event queue 14 . Due to the ‘noise’ levels involved, the dreamer 75 may rapidly flit from topic to topic. The dreamer 75 also remains active when the brain emulation is otherwise in a ‘sleep’ mode.
  • When subsequently processing context-pool 10 data created by the dreamer 75 , analyzer 30 does not create new neurons or relationals in the reinforcement memory 11 . Upon awakening from sleep mode, the analyzer 30 also rapidly purges residual dreamer-generated ‘information’ remaining in the context pool 10 .
  • the dreamer 75 therefore behaves as a ‘movie-maker’ of sorts, unconstrained by relational logic. It creates new ideas loosely based on the context of the moment, ideas that also have very rapid lifetime decays. While this firing of neurons is not in a logical or cohesive way, it still influences decisions and analyses made by the analyzer 30 .
  • Dreamer 75 is algorithmically based, statistically ignoring strong-firing neurons and applying logarithmic weighting to firing neurons as a part of its own processes. In this way, dreamer 75 peruses the context pool 10 , effectively giving weight to neurons barely firing.
  • the impact of the additional neuron firings in context pool 10 is that the dreamer places greater overall weight on neurons than the analyzer 30 would have. During the course of activity, the firing of some neurons will be enhanced because of the multiple references to those neurons. Analyzer 30 appropriately weights information flagged as coming from the dreamer 75 , and continues to apply its normal logic to the data. Where it is seeking new ideas, it will weight dreamer-induced references higher than it ordinarily would.
  • Because dreamer 75 operates at lower effective thresholds than are useful for analyzer 30 , it is more prone to ‘noise’ and error than is the analyzer 30 . While its outputs are less reliable insofar as decisions go, its purpose is different.
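  • A minimal C++ sketch of one possible dreamer weighting curve, ignoring strongly firing neurons and logarithmically boosting weak firings; the cutoff and curve are illustrative only:

      #include <cmath>

      // A barely-firing neuron (e.g. 5%) receives far more relative weight than under a
      // linear view, while strong firings are statistically ignored.
      float dreamerWeight(float firingLevel /* 0.0 .. 1.0 */) {
          if (firingLevel <= 0.0f) return 0.0f;
          if (firingLevel >= 0.8f) return 0.0f;                       // ignore strong firings
          return std::log1p(9.0f * firingLevel) / std::log(10.0f);    // log10(1 + 9x)
      }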
  • dreamer pseudo-information passes through clutter filter 40 where it may be rejected by the personality and temperament filters.
  • the clutter filter rejects more dreamer 75 output by altering rejection filter thresholds.
  • dreamer 75 can be implemented as hard-coded logic, as a micro-coded processor, a software emulation, an embedded processor, FPGA, ASIC, optical or other technology of choice, without altering the means of this disclosure.
  • Speech and Visual Analyzers 60 The emulated brain of the present disclosure may be applied to a mechanical system, whether a skeleton or vehicle, and list-based motor skill learning functions are used. Interfaces from task list handler 13 , event handler 14 or analyzer/correlator 30 can be used to control external hardware. These interfaces can be used to apply specific levels of force, when used with closed-loop feedback, or a specific mechanical position, with or without feedback.
  • Sensors used for the feedback systems are determined by the application. For example, placing one's hand on a table requires either a priori knowledge of the table height and position, or requires feedback such as derived from the eyes. Suitable sensors might be a pressure sensor for the nose (so one doesn't bump into a wall more than once) or for the hand. Aural sensors provide feedback to ascertain the proper formation of sounds, such as to sing on key with existing music.
  • the methods of this disclosure create correlation templates or proposals, visual or aural objects presented for correlation against visual images or sounds.
  • Binary search methods are used to select the proper template for correlation, to rapidly determine degrees of recognition.
  • the correlation method constitutes a processed sensor, a sensor with internal ability to ascertain degrees of recognition.
  • Non-processed sensors are simple temperature, pressure, humidity or light intensity measurement devices, whose outputs are simply formatted appropriately for input to an interface. Processed sensors require interpretation and possible correlation before they can develop meaningful signals. For example, using any number of algorithms, a visual sensor takes a template image and returns the degree of correlation in the present image. Similarly, processed aural sensors take a prototype, such as for a phoneme, and return the present degree of correlation. Phoneme variations may be proposed if a matching word has its neuron firing in context pool 10 .
  • Speech and visual analyzers 60 use a task list or other memory such as 13 to retrieve the next sequential image templates for correlation, as proposed by analyzer 30 . These are conveyed as present settings of the relevant state parameters 22 . For example, some motor skills demand visual feedback for the recognition of a table, its upper surface position, and the position of that portion of the hand to be placed there. These are separate objects that must be recognized in turn by the visual correlation processes.
  • When the table top has been identified, its position must be reported to the context pool 10 , as must the position of a suitable landing site on it, the proper area prescribed by the analyzer 30 's intention and desire.
  • the outputs of visual correlation are conveniently made relative to the location of the skeleton's eyes, such that correction for hand motion can be made.
  • motor skills require feedback for position, rate of travel, distance and the like. From a single sensor (e.g., a pair of camera ‘eyes’), multiple streams of feedback can be derived, with the information forwarded as command or event packets to context pool 10 .
  • Visual and aural cues aid in confirmation of recognition, delivering feedback for required motion control. These are needed, for example, to rotate and tilt the head properly and to then direct the eye yaw and tilt so the detailed center of the foveated vision is centered on the portion of the scene of interest. These matters are handled interdependently by list processor 13 and visual/aural analyzer 60 .
  • the speech analyzer 60 dumps its output into the semantic analyzer 50 to actually parse spoken material into items suitable for the context pool 10 memory.
  • visual analyzer 60 itself can be implemented as hard-coded logic, as a micro-coded processor, a software emulation, an embedded processor, FPGA, ASIC, optical or other technology of choice, without altering the means of this disclosure.
  • Garbage Cleanup and Collection refers to the reclaiming of unused fragments of memory. During this process, the fragments are sought out and objects in surrounding memory are moved up or down, coalescing unused fragments into a larger block. Coalesced blocks are remembered for later reuse.
  • Cleanup is a catch-all phrase to cover all things that need to be done to the memory to optimize it. As noted below, it is used to resize certain areas of memory to optimize usage, reclaiming previously reserved space that could better be used elsewhere.
  • Memory garbage collection and cleanup processes usually involve the movement of information in memory, with suitable updates to indices and pointers to properly reflect the movement.
  • the inactive state of the brain emulator can be used to advantage to handle movement of validated facts from reinforcement to long-term memory 12 . This process leaves unused holes in reinforcement memory 11 , which are also cleaned up.
  • reinforcement memory 11 is scanned to determine what neurons are eligible for transfer. If transfer would be impeded by lack of space, the associated long-term neuron memory record 1251 is resized upwards.
  • neuron space can also be resized downwards during ‘sleep’ times, to optimize it. Neurons 1250 with significant free space behind them can have some of that space reclaimed. Heuristics determine whether or not to downsize. Sparse separation of neurons 1250 in memory is always faster, so reclamation is only done if required.
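  • A minimal C++ sketch of one possible downsizing heuristic of this kind, reclaiming the slack behind a neuron record only under memory pressure; the thresholds are illustrative:

      #include <cstddef>

      // Hypothetical per-neuron accounting: bytes in use versus bytes reserved behind it.
      struct NeuronSpace { std::size_t usedBytes; std::size_t reservedBytes; };

      // Downsize only when memory pressure is high and the reclaimable slack is significant;
      // otherwise leave the sparse layout alone, since sparse separation is faster.
      bool shouldDownsize(const NeuronSpace& n, float memoryPressure /* 0.0 .. 1.0 */) {
          const std::size_t slack = n.reservedBytes - n.usedBytes;
          return memoryPressure > 0.85f && slack > n.usedBytes / 2;
      }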
  • The implementation of deference between two modeled individuals takes place in analyzer 30 .
  • the position of the present individual being modeled within a hierarchy of individual, political or institutional structures is also kept in parameters 22 .
  • Clutter filter 40 uses personality-specific parameters 22 to determine whether the composite personality is even interested in addressing the information, which has been pre-classified. For example, a Choleric temperament is likely to completely ignore human-interest information, whereas a Sanguine temperament readily devours it.
  • the filter 40 is a catch-all area to pass preliminary judgment on data, including judgment of its source.
  • the filter 40 is controlled by a number of dynamically-changing parameters, including the current state of patience. When context pool 10 is full, filter 40 drops information, bio-mimetic to someone in the state of “mental overload.”
  • the brain emulation of this disclosure learns over time, influenced by underlying temperament. Normal human learning processes are used by the emulated brain. Nothing is retained in permanent memory 12 by the analyzer 30 unless it has been reinforced for approximately 21 days, avoiding an accumulation of ‘clutter’ facts and relationships. Facts learned are normally interpreted under the influence of the root temperament, which has its implicit filters and analytical processes (or limited analytical processes, as in the case of the Sanguine).
  • the brain emulation may be ‘trained’ by a method preempting normal temperament-and-time processes, to rapidly absorb facts, control and environmental conditions. The process is therefore described here as preemptive training. It is assumed in this case that the ‘facts’ and relationships presented are previously determined to be true and factual, “from God,” as it were.
  • Preemptive training may be turned on or off at will, externally to the emulator. It can be turned on to effect rapid training of these pristine facts and relationships, bypassing temperament-related decision steps and levels of analyzer 30 and clutter filter 40 .
  • In this training mode, access is given to state parameters 22 and controls not otherwise permitted. When training is completed, this access may be turned off again. The modified parameters then immediately affect the personality.
  • When in preemptive training (‘setup’) mode, the entire contents of one or all memories, and all state parameters 22 , may be copied to external storage. This has application both for the commercial marketing of the information as “intellectual property”, and for military purposes as discussed elsewhere. Such a ‘snapshot of being’ may be replicated elsewhere and used as the basis for additional training.
  • the beliefs are set by a prose-style description in a text file, to be read by the brain emulation. If it does not understand something or considers something illogical, it will ask for clarification by the trainer. The prose can subsequently be altered to preclude that question for the future.
  • Control Parameter Values. The many control parameters 22 and their default values may also be preset by preemptive training. This can also include specific emotional responses to be evoked when defined conditions are met. The result is again that the brain emulation does not know why (he) responds that way, but he simply does. This is useful to preset human-like likes and dislikes for specific things, for accurate emulation of a person.
  • Preemptive training is the method by which the temperament of the brain emulation is specified, including both the base temperament type and the upper-level composite of temperaments. These settings will directly affect the outcome of responses and decisions made by this emulation.
  • the time frame over which the brain emulation learning reinforcement occurs is nominally 21 days, but defaults to somewhat different durations on a temperament-dependent basis.
  • Table 9 gives some representative default reinforcement intervals. ‘Permanent’ learning also takes place during times of emotional stress or trauma, during which the interval of this table is proportionately decreased.
    TABLE 9: Temperamental Learning-Reinforcement Intervals
        Temperament     Duration
        Choleric        21 days
        Sanguine        18 days
        Phlegmatic      15 days
        Melancholy      21 days
  • When the reinforcement time is reduced (this does not affect preemptive training), the brain emulation is more likely to retain trivia and insignificant information. After the emulation is turned operational, those presets become an intrinsic part of its responses. They define the settings from the present time onward, until altered.
  • memories 11 , 12 , and 13 and other tables may be saved to external storage, upon command.
  • One skilled in the art will realize that the methods of saving memory and parameter states are dependent upon the technology of implementation, and that variations in these methods do not materially alter the system of the present disclosure.
  • the emulation's memory and parameter settings can be “snap-shotted” to enable a simulation re-run under new conditions or parameter settings. Anything learned between the snapshot and the time of their later reloading is lost and may not be incrementally recovered and reapplied, unless it was also snap-shotted.
  • Degreed Deference A concept that plays a necessary role in human relationships is that of deference to another person. Deference is not ‘black-and-white’, but exists by degree. Normally the human makes decisions that suit himself under the present conditions, without regard to other people. However, he/she will have particular regard (deference) to some people, such as parents, bosses, military chain of command and the like. The brain emulator uses degreed deference to emulate this implied relationship. Referring to FIG. 13 , the Present-Need-to-Defer parameter 229 provides the weighting.
  • deference tables 128 may be created in memory 12 that apply in a specific context 1283 (e.g., military, political, social order, class). All deference tables are chained together using links such as 1284 and 1285 .
  • the analyzer 30 scans the deference tables to alter a tentative decision, if it conflicts with an external command, such as inferred from an imperative sentence in semantic analyzer 50 .
  • Analyzer 30 seeks a deference table matching one or more active contexts of the moment, as maintained in state parameters 22 . Finding one, it looks up the rank parameter for its self-identity. If the subject being measured for deference is another person, that person's ID 200 is used instead.
  • the relational comparator 1280 makes its decision as the deference output 1282 .
  • the decision weighting 1296 is further adjusted by the present need to defer 229 . Signal 1296 is then used to determine if any decision should be made at all. In this manner, the analyzer 30 defers to commands of authority it is subject to, or weights the decision outcome if the conflicting command was merely a recommendation of external authority.
  • the deference tables 128 therefore supply a realistic influence by external authority upon the brain emulation.
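  • A minimal C++ sketch of scanning context-matched deference tables 128 for rank and weighting a tentative decision by the Present-Need-to-Defer parameter 229; the table layout and weighting rule are assumptions:

      #include <cstddef>
      #include <cstdint>
      #include <optional>
      #include <vector>

      // One deference table, valid in a single context (e.g. military, social), listing
      // IDs from highest authority downward.
      struct DeferenceTable {
          uint32_t contextId;
          std::vector<uint32_t> rankedIds;   // index 0 = highest authority
      };

      // Return the rank of 'subjectId' in a table matching one of the active contexts.
      std::optional<std::size_t> rankOf(const std::vector<DeferenceTable>& tables,
                                        const std::vector<uint32_t>& activeContexts,
                                        uint32_t subjectId) {
          for (const auto& t : tables)
              for (uint32_t ctx : activeContexts)
                  if (t.contextId == ctx)
                      for (std::size_t r = 0; r < t.rankedIds.size(); ++r)
                          if (t.rankedIds[r] == subjectId) return r;
          return std::nullopt;
      }

      // If the commanding party outranks self, scale the tentative decision weight by the
      // Present-Need-to-Defer parameter 229 (0.0 .. 1.0).
      float weightDecision(float tentativeWeight, std::size_t selfRank,
                           std::size_t commanderRank, float presentNeedToDefer) {
          return (commanderRank < selfRank) ? tentativeWeight * (1.0f - presentNeedToDefer)
                                            : tentativeWeight;
      }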
  • a simulation manager in charge of the brain emulator(s) can exert real-time control upon the brain emulations, if the manager's ID is placed at the top of all deference tables.
  • Preemptive training establishes the set(s) of hierarchical tables 128 for relationships between this emulator and others (or other people).
  • the same prose-style description is used to describe the ‘chain of command’ and where the current brain emulation fits within it.
  • A down-line deference (i.e., a condition where another emulator or person should defer to this brain emulation) sets the emulator's expectations of that other emulator or person. Response to a violation of those expectations is dependent upon the base temperament specified for the present brain emulator, and may also be defined during preemptive training.
  • The Implementation of Temperament. Certain assumptions made by any such model of human psychological function, including this one, enable or simplify the understanding of brain functions. Properly done, they permit ready creation and implementation of a synthetic brain based on that model. They may be right, wrong or erroneous, but such assumptions permit rapid creation of a ‘baseline’ implementation. Such assumptions do not affect the overall means of this disclosure.
  • FIG. 14 depicts one such assumption, the makeup of composite personality.
  • the assumption is made that each person is ‘pre-wired’ at birth with a specific set of pre-dispositions, one of four basic types well known to those skilled in the state of the art. These include the Choleric, Melancholy, Sanguine and Phlegmatic temperaments, as categorized and defined among the basic tenets of classical psychology.
  • FIG. 15 depicts another assumption used by the present disclosure and model, approximate traits exhibited by the four classical temperaments.
  • the above ‘pre-wired temperament’ 201 of FIG. 2 is replaced by the actual classical temperament names in FIG. 15 and FIGS. 16 A-D.
  • FIG. 15 illustrates typical traits (largely, but not fully) specific to one temperament type, as indicated above each temperament.
  • FIGS. 16 A-D represent the composite personalities of people, each based upon one of the four underlying predisposition temperaments.
  • a Melancholy of FIG. 16 b may embrace decisiveness or leadership traits more characteristic of a Choleric.
  • FIG. 17 depicts the Choleric parameter 202 in its relationship to the Propensity-to-Decide parameter 222 , noted earlier.
  • the actual value of parameter 222 is the sum-of-products 2421 of the current values of all four temperament-controlling parameters, each with its own weight.
  • the values of the weights 2420 applied are selected and fixed in the emulation, but the controlling temperament parameters may themselves be adjusted as desired.
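  • A minimal C++ sketch of the sum-of-products 2421 forming the Propensity-to-Decide parameter 222 from the four temperament-controlling parameters; the weight values shown are purely illustrative:

      #include <array>

      // The four classical temperament-controlling parameters (each 0.0 .. 1.0).
      struct Temperaments { float choleric, melancholy, sanguine, phlegmatic; };

      // Illustrative fixed weights; in the emulation the weights are selected and fixed,
      // while the temperament parameters themselves may be adjusted.
      constexpr std::array<float, 4> kDecideWeights = {0.9f, 0.6f, 0.3f, 0.4f};

      // Propensity-to-Decide (parameter 222) as a weighted sum of products.
      float propensityToDecide(const Temperaments& t) {
          return kDecideWeights[0] * t.choleric
               + kDecideWeights[1] * t.melancholy
               + kDecideWeights[2] * t.sanguine
               + kDecideWeights[3] * t.phlegmatic;
      }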
  • ‘pseudo-neuron’ temperament parameters may be treated as normal neurons in a neural network.
  • a useful assumption made by this disclosure is that human beings (being emulated) have a root, or base, temperament at birth that gives the human certain propensities for behavior.
  • training and growth may cause the human to take on selective traits found predominantly in one or more of the non-baseline (‘pre-wired’) temperaments.
  • Trauma A part of this disclosure is the implementation of the human response to emotional pressure or to physical or emotional trauma. Such response is modeled here, for example, as the reduction of impact of such experience, training and growth, such that the personality temporarily is dominated by the ‘pre-wired’ temperament. This is depicted in FIG. 18 .
  • In FIG. 18 , the elements of FIG. 17 are augmented by a selector 241 , which takes as its output either of its two inputs in its entirety, or a percentage of each input, as selected by a determining control input.
  • the normal operation and description depicted by FIG. 4 is altered under emotional or physical trauma or extreme pressure, as noted by parameter 230 .
  • selector 241 is interposed between temperament sum 2421 and the Propensity to Decide parameter 222 , such that when under trauma, that decision behavior is instead determined by the ‘pre-wired’ root temperament 201 .
  • the base temperament is pre-chosen as one of the operational set-up values for the brain emulation and is presumably unchanged for ‘life’, although nothing prevents such change.
  • Trauma parameter 230 is triggered and set by sensing other parameter or neuron conditions that indicate levels of extreme emotional pressure or trauma, or physical trauma or shock. Trauma 230 is configured to automatically decay with time, using a linear, logarithmic or other rate, to its nominal ‘off’ (unperturbed) state or value. It is normally triggered by a change of the above conditions and can be re-triggered if the condition is sustained or recurs, and can be designed to decay immediately if the condition is removed.
  • The conditions triggering Trauma parameter 230 are not depicted in FIG. 18 , but are presumed to exist, and consist of a sum-of-products of parameters and brain nodes from whose values the trauma can be sensed.
  • Handling of Gender.
  • processing flow is augmented with additional multiplexor and weighting tables such as 241 and 242 . These would be driven by the Gender parameter 209 , instead of Trauma 230 , for example. Where appropriate in the decision and thought processes, these additions are incorporated to account for gender-related processing differences.
  • Brain emulator 311 as described previously can be configured to receive ‘verbal’ input in the form of a text stream 93 and to emit conversational output text 98 .
  • the brain emulation 3110 can be network-connected to a local or remote network 312 . It becomes a network-connected brain emulation 311 . It should be evident to one skilled in the art that many variations of interface 3112 are possible without changing the system of the present disclosure.
  • a simulation team 315 of human operators can be assigned to upload intelligence to emulators 311 to accurately emulate key individuals in the modeled battleforce. As new information becomes available on the modeled combatants, preemptive training can be used to update the models.
  • the emulations 311 used in the simulation cluster can use the port concept of the TCP/IP protocol to restrict conversations among themselves. Such specific local-communications ports can be precluded from access by other such clusters via conventional internet gateway 313 .
  • Cluster 310 can then be used to emulate an enemy combatant force (e.g., a ‘Red’ force), an unknown combatant force, coalition or friendly (e.g., ‘White’ or ‘Blue’) forces, secure from each other.
  • Multiple clusters 310 may be interconnected to form an integrated battleforce simulation system 31 as shown in FIG. 21 . Simulations would be under the overall direction of a simulation director 330 .
  • the director 330 can have secure access to internal conversations within the battleforce clusters 310 by means of a dedicated encrypted port, through which gateway 313 replicates and encrypts the local busses 320 . This configuration permits independent simulation teams 315 to work independently of each other but under the scenario proposals and directions of the director 330 .
  • the simulation director 330 can remotely take snapshots of the memory and brain parameters of all brain emulations in the system 31 . By taking such periodic snapshots, the simulations can be ‘rewound’ and restarted with different scenarios, intelligence information or updated personality profiles.
  • Simulation teams 315 may preferably consist of psychologists and people with knowledge about the personalities, governments or composite forces they are responsible for emulating. This disclosure permits realistic inclusion of religious belief, moral convictions (or lack of them), chains of command and authority, and other relevant personal information required for accurate predictive modeling of people systems.
  • the simulation system 31 may be located in a local region or may be distributed across the world. Results of such simulations can be made available to the actual warfighters as a part of C4ISR.
  • BNF (Backus-Naur Form)
  • the parser itself is created from a top-down description of the language, and the description (a “.BNF” file) is then translated by the Lingua compiler into a C++ class that serves as a parser. At run-time, that class parses the sentence in accordance with the language definition in the BNF file. Incoming sentences are parsed according to that definition, and the constituent parts are pushed onto a stack.
  • the BNF is written in top-down fashion, such that a sentence is defined as a Subject and a Predicate, while a Subject is a Noun Phrase, which itself is an optional ‘a/an’ determiner, a set of optional adjectives and a noun-equivalent. This process progressively defines sentence parts in more detail, and includes all realistic variations that a sentence may have.
  • a ‘definitive’ sentence defines something.
  • the brain supposedly remembers the definition of a word, and possibly makes associations or relationships with it.
  • definition of a word or topic may begin with a definitive sentence, but the definition is elaborated with declarative commentary afterwards.
  • the parser should be as streamlined and fast as practical. Currently the majority of the process load is caused by efforts to differentiate between definitive and declarative statements. A lot of recursion occurs as one pattern match is attempted, fails, and another is tried. Additionally, other sentence types calling on these same patterns have to go through this extra recursion as well.
  • the following post-parsing system in the brain could be used to assist parsing:
  • Post parsing can more readily look forward within the stack to help determine a DEFN versus DECL, because we are not restricted to any cases or sub-patterns of the statement pattern. This system is more efficient, and in the end enables us to accurately differentiate between DEFNs and DECLs.
  • Modifiers (e.g., all, some) and determiners (a, an, the) in the subject, and verb types (is, are), are primary elements useful to determine if a statement is definitive. Absence of a direct object is also a possible indicator of a definitive sentence.
  • the original methods devised to determine a DECL were:
    TABLE 10: Some Conditions for Definitives
        Remarks               Vb                  Suffix     Examples
        Pres-Simple Forms:
        A.                    ITV                            Dogs bark.
        B.                    IRR-PRES                       Dogs unwind.
        C.                    Vos Gerund                     Dogs enjoy hunting.
        E.                    Vos Adj                        Dogs act strange.
        F.                    Vos Noun_Ph                    Dogs resemble their owners.
        IS-BE Forms:
        A.                    Is_Be IRR_PPART                Houses are built/broken.
        B.                    Is_Be IRR_PRES      ers, ed    Dogs are forgivers.
        C.                    Is_Be Noun_Ph                  Dogs are animals.
        D.                    Is_Be Adj_Ph                   Dogs are slimy.
    These are now replaced with the following.
  • In FIG. 22 there are illustrated sample relational connections.
  • the following example sentences are parsed (as definitive sentences) and are then used to permanently create neural relationships.
  • The groups of boxes associated with the Man and the Chase neurons are relational connections stored with the respective neuron.
  • This example shows two sentences on the same general topic (e.g., men), defining what certain men are like. It also demonstrates what/who is known to be capable of belching (whatever that means).
  • FIG. 22 uses some of the relational values from the table below.
  • R_GAMUT     (0-31)           Any Neuron    A gamut table of 1-32 entries follows.
  • R_INHIBIT   0 . . . 100%     Any Neuron    % inhibition of firing of the target neuron, even if other relationals enable it.
  • Other examples of relationships established using these relational records are shown in FIG. 23 , particularly illustrating the R_IMPLIES (100%) case and the NOT (inhibitor) case.
  • the linkage codes may be intermixed within either the noun or verb neuron relationals.
  • negation is primarily an ‘inversion’ operation.
  • dogs are established to be animals (via Implies), and to have ‘dogpaws’ (via Possn).
  • the constraint is put on dogpaws (via Not) that inhibits ‘animal’ from turning on unless ‘dog’ itself is firing strongly.
  • Not complements (subtracts from 100%) the present recognition level of ‘dog’. If we don't think the object we're looking at is a dog, i.e., the firing level for ‘dog’ is only 20%, use of a Not then inhibits ‘animal’.
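  • A minimal C++ sketch of the Not (inhibitor) operation as described: the inhibiting contribution is the complement of the source neuron's firing level, so a weakly firing ‘dog’ strongly inhibits ‘animal’; the subtractive combination rule is an assumption:

      #include <algorithm>

      // Not complements the source firing level: a 20% 'dog' yields an 80% inhibition.
      float notContribution(float sourceFiring /* 0.0 .. 1.0 */) {
          return 1.0f - sourceFiring;
      }

      // Apply the inhibition to a target's tentative firing level (e.g. 'animal').
      float applyInhibit(float targetFiring, float inhibition) {
          return std::max(0.0f, targetFiring - inhibition);
      }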
  • a ‘background job’ can be run while the brain is sleeping or otherwise not occupied. This operation can go in and remove the redundant linkage, adjusting the weights to other neurons to a suitable compromise value.
  • Gamut is a means of expressing a range of values or a group of ordered relationals in an ordered list. It includes expression of notions such as:
  • a specific concept or term (e.g., color);
  • a set or list of terms that expresses the known variants of that concept.
  • color may include violet/purple, blue, green, yellow or red in its gamut—and many other variants besides.
  • Each item of terminology in the list is then assigned a relative value (it may also be an absolute measure, such as wavelength).
  • Gamut may also be applied to certain aspects of mental state, and to emotion. For example, FIG. 25 .
  • FIG. 25 expresses directness of communication along a gamut of 0-100%, and defines certain words that imply various degrees of the state of directness. It also illustrates use of idioms (‘to the point’) in the enumeration of gamut. This will be described in more detail herein below.
  • the value of organizing information around gamut is that nuances of expression can immediately be correctly deciphered from sentences directed at the behavioral brain model. Just as useful is that it permits the rapid construction of sentences directed at another person, sentences that contain human-like nuance.
  • idioms may be freely used in enumerating gamut. Use of idioms gives enormously human-like appeal to the outbound part of the conversation. Idioms are word phrases whose meaning is culturally determined, a meaning different from that of the individual words used in the phrase. For example, “off the wall” does not imply removal of something from a wall. Rather, it currently means unusual or strange, in the positive sense of the terms.
  • Color is an adjective whose value ranges from deep violet at 390 nm wavelength to deep red at 710 nm wavelength.
  • the gamut of color includes the list ⁇ deep violet, violet, blue, green, yellow, orange, red, and deep red ⁇ .
  • an additional gamut of tints may be added to the list of colors defined. For example, orange turns to brown if the intensity is reduced, or to tan if white light color is added to it.
  • the alertness could range from a rather unconcerned level, which would be basically inattentive (i.e., no concern about traversing the path), to one of high alertness or urgency.
  • the concept of alertness could be neutral with positive and negative ranges disposed thereabout.
  • the relationship with the path would define how this is expressed by the one system. For example, the phrase could be “I am very bored about walking along this path” or it could be “I am very frightened about walking along this path.” However, the level of danger or alertness would be expressed as a percentage, from a low level (inattentive or unconcerned) to a high level (very frightened).
  • the danger level would result in the various levels of alertness being selected between the extrema.
  • An example of the gamut is set forth as follows for alertness:
  • the above-noted table is associated with an alertness neuron, defined in the system as neuron “1036.”
  • Each neuron in the system is defined by a number which is disposed in parentheses adjacent to a particular term.
  • the alertness neuron would have a gamut table (ordered list) associated therewith which would have, for example, a rating of 91% for the term “urgent,” which is associated with the neuron “1019.” This is a forward relational, whereas the urgent neuron, neuron “1019,” would have a back relational to the alertness neuron “1036.”
  • gamuts are basically tables of ordered lists stored in the relational portion of a particular neuron, which constitutes the root neuron for the gamut; these are created during population of the system. This example will be described in more detail herein below.
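  • A minimal C++ sketch of such an ordered gamut list stored with its root neuron, using the alertness example (root neuron “1036”, ‘urgent’ at 91% referring to neuron “1019”); the other entries and IDs are placeholders:

      #include <cstdint>
      #include <vector>

      // One entry of a gamut (ordered list) kept in the root neuron's relational area:
      // a relative value and a forward relational to the member neuron, which in turn
      // holds a back relational to the root.
      struct GamutEntry {
          float    value;      // relative position in the gamut, 0.0 .. 1.0
          uint32_t neuronId;
      };

      // Illustrative gamut for the 'alertness' root neuron (ID 1036); only the 'urgent'
      // entry (91%, neuron 1019) is taken from the text, the others are placeholders.
      const std::vector<GamutEntry> kAlertnessGamut = {
          {0.05f, 2001},   // "inattentive"  (hypothetical ID)
          {0.50f, 2002},   // "watchful"     (hypothetical ID)
          {0.91f, 1019},   // "urgent"
      };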
  • color is the ‘owner’ or root neuron of the gamut, and the names of the colors are independent terms in their own right, but all point to color, the owner.
  • Words for a brain emulation can be stored and accessed in a variety of formats, such as linear lists, sorted tables with binary search, hash tables, and the like. The exact method used is less important than the fact that once a word is looked up, immediate access is provided to additional information about that word. That is, a lookup of a word subsequently provides fast access to information about the word.
  • the gamut's root owner must maintain a list of all constituent members of the gamut in an ordered list. For example, color must contain a list of all members of the color gamut, again as a list of identifiers for the member neurons.
  • Gamut may be applied to any measurable quantity, color, for example.
  • the appropriate name or terminology to express that color can be quickly chosen by indexing into the list of gamut identifiers.
  • a small random number can be added to the present numerical value and the sum used as the gamut lookup index instead.
  • a convenient place to store the gamut tables in the brain model of this invention is in the relationals list that is associated with every neuron.
  • A depiction of a storage layout that permits this is shown in FIG. 26 , and applies to the gamut for directness illustrated in FIG. 25 . Only two of the example gamut enumerations (‘oblique’ and ‘inarticulate’) are shown. The same layout scheme also applies to color and its gamut enumeration.
  • FIG. 26 shows an example ‘gamut owner’ or ‘root’ for the gamut process.
  • it represents a hypothetical mental state within the brain defined here as directness, meaning the directness with which the brain will communicate at the moment.
  • the directness is looked up by name “directness” or may be otherwise known by a priori knowledge or design. Associated with it—through training—is a set of terms that convey the gamut of directness, ranging from completely inarticulate to very blunt. Regardless of the precise terms used to enumerate the levels of gray in between these two extremes, each enumeration contains an associated value. The examples used here use a range of 0-100%.
  • the gamut table happens to be stored in the neuron area set aside for relationals, weighted connections to other neurons.
  • the term ‘directness’ might imply the word should be given noun typage.
  • the fact that it contains a gamut list is recorded as a type-related flag bit or code.
  • the gamut table itself contains a list of identifiers that permit access to words/neurons that define a specific level of directness.
  • Given the present mood of directness, which, for example, might have a value of 60% defined external to the neuron, the hardware or software implementing the brain looks for the closest entry in the table, which happens to be 70%, implying the level of directness is ‘concise’.
  • the specific level of directness is recorded in the gamut table along with the identifier, and is duplicated as an internal value stored with the related enumeration word.
  • the duplication is not crucial to implementation, but can save additional search time in many cases.
  • All constituent elements of the gamut list—and the gamut list owner—can be looked up in memory using the system of implementation chosen for the memory storage area. This means that no secondary lookups are necessary for the parsing or understanding of a word within a sentence. Because the gamut table already contains a list of identifiers for the constituent parts, no lookup is needed for any of them, because the index or identifier can directly imply an address or location in the memory where the data is contained.
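  • A minimal C++ sketch of the closest-entry selection described above (a present directness of 60% selecting the 70% entry, ‘concise’); the small random offset mentioned earlier could be added to the present value before this lookup:

      #include <cmath>
      #include <cstdint>
      #include <vector>

      struct GamutEntry { float value; uint32_t neuronId; };

      // Pick the gamut member whose stored value is closest to the present mood value.
      uint32_t closestGamutMember(const std::vector<GamutEntry>& gamut, float presentValue) {
          uint32_t best = 0;
          float bestDist = 2.0f;   // larger than any possible distance on a 0..1 scale
          for (const auto& e : gamut) {
              float d = std::fabs(e.value - presentValue);
              if (d < bestDist) { bestDist = d; best = e.neuronId; }
          }
          return best;
      }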
  • In FIG. 27 there is illustrated a more detailed diagram of a particular root neuron 2702 associated with a particular term.
  • This term may, in and of itself, be related to other terms not associated with a gamut.
  • These are the relationals. They are termed R 1 , R 2 and R 3 , for example.
  • One of the relationals for this particular neuron 2702 is a gamut, i.e., this provides information to this particular neuron 2702 that it has many shades of meaning, which particular shades of meaning are selected through some type of gamut input from an external source.
  • Each of the relationals, R 1 , R 2 and R 3 for example, has a relationship or a link with a particular neuron, neurons N 1 , N 2 and N 3 respectively.
  • the gamut relational is associated with a table 2704 which is part of the neuron.
  • This has N relationships associated therewith, each of which gamut relationships has a percentage associated therewith, ranging from 0% at one extreme to 100% at the other.
  • this will define the relationship to one of the neurons associated with particular gamuts.
  • For the gamuts G 1 and G 2 for example, there are provided neurons N G1 , N G2 , etc., up to neuron N G3 .
  • As an example of a particular neuron, consider the following table for an anticipation neuron. In general, gamuts are used primarily with adverbs, adjectives and verbs.
  • the neuron anticipation is associated with two different gamut tables, one for an adjective, and one for an adverb.
  • the word type will define which of the gamut tables is utilized. The table also indicates how many gamut entries are associated with a particular gamut table. In this example, there are twenty-one relationals associated with each gamut table: 21 for the adjective type and 21 for the adverb type. Only a portion of these are illustrated.
  • the anticipation level or mood is at a level of approximately 40%.
  • the system would look at the input and then select, as the particular mood to assert in a sentence or otherwise, the term “forebode.” If a sentence were being constructed, this would be the word that this particular system utilizes.
  • the input that defines it, i.e., the percentage input, would be one that is learned with experience. The context of the sentence, as described above with respect to the “path” example, would define for a particular system what that percentage value would be. However, this could be different for different systems.
  • a system 2802 generates a sentence referred to as the sentence “I have a foreboding feeling about this.” This sentence is communicated to a second system 2804 .
  • the system 2802 has a gamut table 2806 associated with the anticipation neuron which has all of the terms for the adjective associated therewith and which have a percentage value from the lowest level of anticipation to the highest level of anticipation.
  • the foreboding gamut for anticipation is at a 42% level.
  • the input level of anticipation is 40%.
  • the way this would be expressed would be by using the term “foreboding.”
  • the anticipation level is 10%, which means that this term “forebode” is associated with a much lower level of anticipation. Therefore, the one system 2804 would view the sentence that was constructed by the system 2802 as having a lower anticipation level than the system 2802 intended.
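  • A hedged sketch of this exchange (function and variable names are illustrative, not from the patent): each system holds its own gamut table for anticipation, so the same enumeration word can map back to a different intensity on the receiving side. The value 42% for system 2802 and 10% for system 2804 come from the example above; the remaining entries are placeholders.

    # Each system's anticipation gamut maps enumeration words to percentages.
    GAMUT_2802 = {"forebode": 42, "expectant": 60}   # sender's table (2806)
    GAMUT_2804 = {"forebode": 10, "expectant": 55}   # receiver's table

    def interpret(term, gamut):
        """Recover the local intensity implied by a received enumeration word."""
        return gamut.get(term)

    term = "forebode"                       # chosen by system 2802 for a ~40% mood
    print(interpret(term, GAMUT_2802))      # 42: the level the sender intended
    print(interpret(term, GAMUT_2804))      # 10: the level the receiver perceives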
  • the language parser is the device or object that reads sentences (e.g., English) and gleans purpose and intent from their structure.
  • So-called ‘natural language’ parsers operate by breaking sentences into their constituent parts in a top-down fashion. Sentences are broken into clauses, and clauses into subject and predicate. The latter are in turn broken down into phrases, special structural words, and lowest-level parts of speech, the words themselves.
  • the words are looked up in a manner appropriate to the neural model or brain system chosen.
  • many words are specialized and treated as nerve-like interfaces to the outside world. Even with these specialized behaviors, each word is looked up by the parser in the brain's memory, and the related semi-permanent neuron ID for it is returned to the parser.
  • a definition of English grammar is first defined and then fed to a software tool.
  • the software tool in turn compiles the definition of English grammar and synthesizes a complete parser for the English language.
  • Such tools exist in the general marketplace.
  • BNF (Backus-Naur Form) is one format in which such a grammar definition can be expressed.
  • each word represents a single neuron.
  • Idioms and multi-word phrases are likewise treated as single neurons.
  • the disclosed behavioral brain model uses a file of information formatted for easy ingestion into the brain. (There also can be provided a second specialized parser that extracts and remembers data from this file.)
  • the file primarily contains ‘structural words’, specialized words that do not change much from generation to generation. It excludes most nouns and most verbs.
  • Verbs that remain in that initial-word training list include the irregular verbs (do/did/done), various forms of to be, and other such verbs that hold special places in English grammar. Similarly, adverbs and adjectives relating to human senses are all retained there, as are all the prepositions. Essentially, any word needed to relate to ‘the world immediately around me’ is kept there.
  • This initial-word training list is a natural place to put common idioms and gamut tables. Doing so also permits ready tweaking of relationships that are not black-and-white in English. For example, one can define a gamut of ‘how much’ that includes: none, a little, some, quite a little, quite a lot, gobs and mountains of. If ‘some’ is an indicator of ‘an okay amount’, it may not be obvious that quite a little should indicate more than ‘some’ implies. Gamuts—and gamuts pre-stored in the structural words table—permit one to readily tweak the relative position and orderings of such words.
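  • As a small illustrative data sketch (the percentages are invented solely to show relative ordering, and are the kind of values one would tweak in the initial-word training list), the ‘how much’ gamut above might be stored as:

    # Ordered 'how much' gamut; adjusting a single number re-ranks a term.
    HOW_MUCH = [
        (0, "none"),
        (10, "a little"),
        (30, "some"),
        (45, "quite a little"),
        (65, "quite a lot"),
        (85, "gobs"),
        (100, "mountains of"),
    ]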
  • Gamut Tables: Specific Relative Values of a Set
  • Certain tables that define these relationships (which are named gamut tables) can be used to define both the terminology and neurons that depict the nuances of value over the ‘gamut’, or range, of possible values.
  • Examples include adverbs, adjectives, irregular verbs (tenses), emotion, and tables of deference (e.g., military, political or social rank, which are non-weighted ordered lists).
  • Some 250 or more separate emotions can be readily identified, some of which are mutually exclusive and some of which describe markers along a range of values (i.e., a gamut of emotions). That set of emotions has been divided into some 30+ specific emotions, each having its independent gamut for which certain values are named.
  • the value of the gamut approach is simplification of emotions into closely-related categories that the brain model can describe to an interested party. Rather than stating the percentage of emotion it feels (i.e., 0-100%, which would be silly and stilted), it can now use the conventional terminology that describes its present feeling. This also permits the use of idioms (well being or scatter-brained) to succinctly communicate nuances of emotion.
  • each root emotion can be configured to reserve 32 consecutive (preferably the first 32) relational slots to depict the name of a variant of emotion. While 32 slots is a matter of convenience, variable-length lists or other fixed-length list sizes can be used. The assignment of weight-codes or percentages for the gamut table is described herein above.
  • Such a gamut of feelings might look something like that illustrated in FIGS. 29 a and 29 b , an example of what a mental clarity emotion's mapping might look like.
  • the choice of underlying emotion name and the terms used to describe its intensity are subject to change, tweaking and additions. The examples are intended to be illustrative and not precise, and actual values used may reasonably be quite different.
  • the intensity of a given emotion could vary from 0-100%, or even -100% to +100%. While either range may be used, the range of 0-100% is preferably utilized, with 50% being a nominal emotion with “nothing happening”.
  • the following table shows example gamuts of emotion.
  • the percentage assignments happen to be loosely based on 3% increments, such that the gamut can be expressed over a range of 32 unique values. (This way, a range of 0-100% can be expressed as a value from 0-31.)
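  • A hedged sketch of that packing (the exact rounding rule is an assumption): a 0-100% intensity quantizes to one of the 32 relational slots, and the slot index recovers a nominal percentage of roughly 3.2% per step.

    def intensity_to_slot(percent):
        """Quantize a 0-100% emotion intensity to one of 32 gamut slots (0-31)."""
        return min(31, round(percent / 100 * 31))

    def slot_to_intensity(slot):
        """Recover the nominal percentage represented by a slot index."""
        return slot * 100 / 31

    print(intensity_to_slot(50))                 # 16: mid-range, 'nothing happening'
    print(round(slot_to_intensity(16), 1))       # 51.6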
  • gamut offers a way to systematically define nuances of expression that are commonplace and in daily usage. It simplifies the organization and cross-linking of information, facts and relationships. Gamut is also a perspective and way of approaching the problem of nuance in human interaction.

Abstract

A method for emulating human cognition in electronic form is disclosed. Information is received in the form of a textual or voice input in a natural language. This is parsed into pre-determined phrases based on a stored set of language rules for the natural language. The parsed phrases are then examined to determine whether they define aspects of an environment and, if so, adaptive weighting factors are created for the natural language, the created weighting factors being operable to create a weighted decision based upon the natural language. It is then determined whether the parsed phrases constitute a query and, if so, the weighting factors are used to make a decision in response to the query.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims benefit of U.S. Provisional Patent Application Ser. No. 60/811,299, filed Jun. 5, 2006, and entitled THE CONCEPT OF GAMUT AND ITS USE (Atty. Dkt. No. VISL-27,690), and is a Continuation-in-Part of pending U.S. patent application Ser. No. 11/697,721, filed Apr. 7, 2007, and entitled METHOD FOR SUBSTITUTING AN ELECTRONIC EMULATION OF THE HUMAN BRAIN INTO AN APPLICATION TO REPLACE A HUMAN (Atty. Dkt. No. VISL-28,262), which is a Continuation-in-Part of pending U.S. patent application Ser. No. 11/670,959, filed Feb. 2, 2007, and entitled METHOD FOR MOVIE ANIMATION (Atty. Dkt. No. VISL-28,177), which is a Continuation-in-Part of pending U.S. patent application Ser. No. 11/425,688, filed Jun. 21, 2006, and entitled A METHOD FOR INCLUSION OF PSYCHOLOGICAL TEMPERAMENT IN AN ELECTRONIC EMULATION OF THE HUMAN BRAIN (Atty. Dkt. No. VISL-27,693), which is a Continuation of U.S. application Ser. No. 11/154,313, filed Jun. 16, 2005, and entitled METHOD FOR INCLUSION OF PSYCHOLOGICAL TEMPERAMENT IN AN ELECTRONIC EMULATION OF THE HUMAN BRAIN, now U.S. Pat. No. 7,089,218, issued Aug. 8, 2006 which is a Continuation of abandoned U.S. application Ser. No. 11/030,452, filed Jan. 6, 2005 (Atty. Dkt. No. VISL-27,019), and entitled A METHOD FOR INCLUSION OF PSYCHOLOGICAL TEMPERAMENT IN AN ELECTRONIC EMULATION OF THE HUMAN BRAIN; which claims the benefit of U.S. Provisional Application for Patent Ser. No. 60/534,641, entitled A NEURIC BRAIN MODELING SYSTEM IN THE MILITARY ENVIRONMENT, U.S. Provisional Application for Patent Ser. No. 60/534,492, entitled METHOD FOR INCLUSION OF PSYCHOLOGICAL TEMPERAMENT IN AN ELECTRONIC EMULATION OF THE HUMAN BRAIN, U.S. Provisional Application for Patent Ser. No. 60/534,659, entitled DESIGN OF THE NEURIC BRAIN, all filed Jan. 6, 2004, now expired, U.S. Provisional Application for Patent Ser. No. 60/764,442, filed Feb. 2, 2006, and entitled USE OF THE NEURIC BRAIN MODEL IN MOVIE ANIMATION (Atty. Dkt. No. VISL-27,537); and which U.S. application Ser. No. 11/697,721, filed Apr. 7, 2007, and entitled METHOD FOR SUBSTITUTING AN ELECTRONIC EMULATION OF THE HUMAN BRAIN INTO AN APPLICATION TO REPLACE A HUMAN (Atty. Dkt. No. VISL-28,262) also claims the benefit of priority from U.S. Provisional Application Serial No. 60/790,166, filed Apr. 7, 2006, and entitled BRAIN MODEL (Atty. Dkt. No. VISL-27,620.) All of the above are incorporated herein by reference in their entirety.
  • TECHNICAL FIELD
  • The present invention pertains in general to artificial intelligence and, more particularly, to emulating a human.
  • BACKGROUND
  • In certain applications, a human is required in order to assess the conditions surrounding the operation of a particular system or the execution of a certain task, and to determine whether the task has progressed or whether conditions in the system have changed. Once an observation is made by a user, based upon that user's experience and information, some type of action can be taken. For example, it might be that a user would make a change to the system in order to maintain the system in a particular operating range, this being the task of that individual. Further, it might be that the user is tasked to achieve certain results with the system. In this situation, the user would take certain actions, monitor the operations, i.e., the surrounding environment, and then take additional actions if necessary to ensure that the tasks are achieved. This is also the case with military operations wherein multiple individuals might be involved in carrying out a military mission. In this situation, multiple individuals, each having their own expertise, would be given a certain task that, when carried out in concert with the others, would achieve a certain military goal. However, each of these individuals, although having a certain amount of specific training, also has the ability to make decisions that are not directly accorded to their experience and training. For example, if an unknown factor enters into the mission, i.e., a previously unknown obstacle occurs, an individual has the background to make a decision, though this decision is made based upon prior experience in different areas that the human might have.
  • Sometimes an expert system is employed to operate a particular system in place of the human. These expert systems are trained based upon an expert human, wherein that human is presented with certain conditions and the resulting actions are recorded. Whenever those conditions exist with respect to the expert system, the expert system will take the same actions. However, if conditions exist that were not part of the training dataset for the expert system, the results would be questionable, as there is no basis upon which the expert system can make such a decision.
  • SUMMARY OF THE INVENTION
  • The present invention disclosed and claimed herein comprises a method for emulating human cognition in electronic form. Information is received in the form of a textual or voice input in a natural language. This is parsed into pre-determined phrases based on a stored set of language rules for the natural language. The parsed phrases are then examined to determine whether they define aspects of an environment and, if so, adaptive weighting factors are created for the natural language, the created weighting factors being operable to create a weighted decision based upon the natural language. It is then determined whether the parsed phrases constitute a query and, if so, the weighting factors are used to make a decision in response to the query.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying Drawings in which:
  • FIG. 1 illustrates a diagrammatic block diagram of the overall animation system;
  • FIGS. 2 a and 2 b illustrate a diagrammatic view of an animation sequence;
  • FIG. 3 illustrates Influence Inclusion—An example of weighted random influence;
  • FIG. 4 illustrates Implementation of the Brain Emulation—Block diagram of brain emulation;
  • FIG. 5 illustrates Language Grammar Sample—Example of natural language grammar description;
  • FIG. 6 illustrates Example Parser Diagnostic Trace—Example trace of grammar parsing;
  • FIG. 7 illustrates Example Relationals Between Neurons;
  • FIG. 8 illustrates Organization of Neuron Tables—General organization of neuron memory lists;
  • FIG. 9 illustrates Table of Neurons—Internal organization of a neuron;
  • FIG. 10 illustrates Example Relational Record—Contents of inter-neuron relationship record;
  • FIG. 11 illustrates Event Queue and Memory—Organization of the event processor;
  • FIG. 12 illustrates Content of an Event—General internal contents of an event record;
  • FIG. 13 illustrates A Deference Table—Example table of orders of deference;
  • FIG. 14 illustrates The Layered-Temperament Personality;
  • FIG. 15 illustrates Characteristic Traits of the Temperaments;
  • FIGS. 16A-D illustrate The Four Composite Temperament Models;
  • FIG. 17 illustrates Typical Temperament—Weighting of Parameters;
  • FIG. 18 illustrates Implementation of Pressure or Trauma;
  • FIG. 19 illustrates Network-Connected Brain Emulation;
  • FIG. 20 illustrates Example Battleforce Simulation Cluster;
  • FIG. 21 illustrates Example Integrated Battleforce Simulation System;
  • FIG. 22 illustrates sample relational connections;
  • FIG. 23 illustrates implied relationals in linkages;
  • FIG. 24 illustrates the “not” relationships;
  • FIG. 25 illustrates a diagrammatic view of an application of the gamut to a particular mental state, directness;
  • FIG. 26 illustrates a diagrammatic view of a specific gamut list interfaced with different neurons and their relational links;
  • FIG. 27 illustrates a detailed view of a particular neuron and the gamut associated therewith and the interface with various other neurons;
  • FIG. 28 illustrates a diagrammatic view of an example of the interface between two systems;
  • FIGS. 29 a and 29 b illustrate diagrammatic views of two types of gamuts.
  • DETAILED DESCRIPTION
  • Referring now to FIG. 1, there is illustrated an overall diagrammatic view of the system of the present disclosure. This system is directed toward the concept of controlling an animation engine 101 through the use of central animation brains 102, one associated with a first character named “John” and one with a second character named “Jane.” Each of these brains 102, as will be described in more detail herein below, is capable of being trained to express emotion, that emotion being translated into control signals that can be input to the animation engine 101. A communication path 104 is provided for communicating information from each of the brains 102 over to the animation engine 101. This communication path can be any type of communication path, such as a TCP/IP protocol. Of course, it is well understood that any type of communication path can be utilized. Each of the brains 102, after training, will have a character, this character being defined in a memory system 105 associated with each of the characters. In these memories are contained the various experiences of the particular character and various weights. These are trained and adaptable. During the generation of the animation, the entire animation is controlled by a director who may tweak the script. The screen-writer's material that is defined in a script 106 basically instructs the particular brain or character to do a particular action or instructs an input to occur in the animation engine 101. For example, as will be described in more detail herein below, the animation engine 101 can be directed to drop a box in close proximity to a particular character. This essentially is in the “virtual world” of the particular character. This action can then be interpreted by the brain and experience is gained from that action through these various inputs. As will be described herein below, this action in the animation engine 101 can elicit various emotional responses, which emotional responses are in direct response to the environmental changes within this virtual world proximate to the character, which will then cause the brain to generate outputs to express emotion, this being facilitated by the animation engine.
  • Referring now to FIGS. 2 a and 2 b, there is illustrated a very simplistic concept of this animation sequence. A character 202 is provided in the virtual world defined as a series of vertices in the x, y and z direction. The character 202 is basically a character that can be represented through various concepts, but has a positional relationship with respect to the environment. There will be a head which can rotate in all directions and which will have perception points, those perception points being, for example, the eyes, the nose, the mouth and even the ears. In this example, only the concept of vision will be described. Therefore, there will be two points in the virtual space that represent the vision. These points can be rotated by rotating the head in an animation sequence such that they are oriented in the direction of an object, for example, a falling box 204. The falling box is illustrated as falling from an upper level, down to a surface, and then bouncing. As will be described herein below, the character 202 is animated to recognize the box, move its head to view the box and follow the box from the upper location to the lower location and as it bounces. Further, as will be described herein below, there will be emotion expressed as a result of seeing the box and any actions that may occur with respect to the box in the environment of the individual. FIG. 2 b illustrates a situation wherein the individual is aware of the box in the personal environment and in proximity thereto and, after seeing the box, views the box as a threat. Once the box is viewed as a threat, it can be seen that the character, in the lower portion of FIG. 2 b, is placed into an animation sequence wherein the character will evade the box and move away from the box to a potentially safe area. As noted, this will be described in more detail herein below.
  • Core Brain
  • The central brain of the present disclosure distills the temperament, personality and instantaneous state of a human individual into a series of Brain Parameters. Each of these has a value varying from zero to 100 percent, and is loosely equivalent to a single neuron. These parameters collectively define the state of the person's being and specify matters of temperament and personality. Some parameters are fixed and seldom if ever change, while others change dynamically with present conditions.
  • Relationships between parameters, if any, are pre-established. The Parameters are connected with the rest of the brain model in such a manner as to alter the decision processes, decision thresholds and the implied personal interests of the underlying model they become a part of.
  • The exact list of Parameters and their definitions are not germane to the system of the present disclosure, and may include more or fewer parameters in any given implementation thereof. Numerous parameters define certain tendencies specific to certain temperaments. Some define the present emotional state, such as sense of confidence in a decision. Others are place-holders that define such things as the present topic of conversation or who the first, second or third persons in the conversation are. Yet others define physical parameters such as orientation within the environment, sense of direction, timing and the like.
  • Some brain Parameters may be loosely arranged in a hierarchical fashion, while others are not, such that altering any one parameter may affect others lower in the hierarchy. This arrangement simplifies the implementation of personality.
  • Example Parameters. Table 1 illustrates a few of several hundred such parameters by way of example. The ‘Choleric’ parameter 202, for example, is ‘above’ others in the hierarchy, in that altering the percentage of Choleric temperament affects the value of many other parameters. For example, it affects the Propensity to Decide 222. Each can be treated as a neuron that may be interconnected with other (non-parameter) neurons. The parameter neurons may serve in a role similar to an I/O port in a digital computer. A brief sketch of this hierarchical arrangement follows Table 1 below.
  • The below table is not a complete set of parameters, but is a representative set of parameters useful for the explanations that follow.
    TABLE 1
    General Examples of Brain Parameters
    Parameter Description
    201 Root Temperament Choleric, Melancholy, Sanguine or Phlegmatic
    202 Choleric, Ratiometric Percentage contribution of Choleric attributes
    203 Melancholy, Ratiometric Percentage contribution of Melancholy attributes
    204 Sanguine, Ratiometric Percentage contribution of Sanguine attributes
    204 Phlegmatic, Ratiometric Percentage Contribution of Phlegmatic attributes
    209 Gender Male or female
    Sense of Confidence (Decisions) Degree of confidence in a decision
    Sense of Confidence (Motor Skills) Degree of confidence in present motor skill
    Sense of Determination Degree of determination to continue present plan
    Sense of Dread Present sense of dread being experienced
    Sense of Enjoyment Present sense of enjoyment
    Sense of Embarrassment Present sense of embarrassment
    229 Present need to Defer Present need to defer to external person's desire
    230 Trauma State of physical or emotional trauma
    Present Goal (1 of n) Present objective(s), a list
    Long Term Goal (1 of n) Long term objective(s), a list
    Topic of conversation (1 of n) The present subject of conversation, a list
    Self Identify Recognition of identity such as target for communications
    Present Speaker Identity of person speaking
    Person Spoken To Identity of person being spoken to
    Present Object Identity of object/person being spoken of
    235 Correlating Facts, status True if presently correlating information
    236 Hottest Node, status Hottest-firing node in context pool, for threshold scaling
    237 Activity Threshold Minimum firing level for context pool memory
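  • The following is a minimal, hypothetical sketch of the hierarchical parameter arrangement referred to above (the propagation rule and weights are assumptions, not the patented mechanism): each Brain Parameter holds a 0-100% value, and changing a parameter high in the hierarchy, such as Choleric 202, nudges dependent parameters such as the Propensity to Decide 222.

    class Parameter:
        """A single Brain Parameter holding a value in 0-100%."""
        def __init__(self, name, value=50.0):
            self.name = name
            self.value = value
            self.dependents = []        # (parameter, weight) pairs lower in hierarchy

        def set(self, value):
            self.value = max(0.0, min(100.0, value))
            for param, weight in self.dependents:
                # Assumed propagation rule: pull the dependent toward this
                # parameter's value in proportion to the link weight.
                param.set(param.value + weight * (self.value - param.value))

    choleric = Parameter("Choleric (202)", 70.0)
    decide = Parameter("Propensity to Decide (222)", 40.0)
    choleric.dependents.append((decide, 0.5))
    choleric.set(90.0)
    print(decide.value)                 # 65.0: the dependent moved with Choleric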
  • In traditional models of the human brain, facts are simplistically represented as a single neuron, each of which may ‘fire’ at some level of 0 . . . 100%. The degree of firing is construed as an indication of the present recognition of that fact. These neurons are interconnected by weighted links, based upon the relationship and experience between connected neurons.
  • Example Decision-Related State Parameters. Some of the key state parameters used in the decision process are detailed below. Some are set by personality traits, some by the context of the moment and are described elsewhere. Several have baseline values established by the Propensity to parameters.
  • Activity Threshold 237 is the minimum percentage of full-scale that a neuron must fire before it is considered a candidate for inclusion in short-term memory.
  • Base Decision Threshold 250 is a personality-based starting basis for the decision threshold. Long-term training and learning experience can raise or lower the base value.
  • Correlating Facts 235 is true if the correlator portion of the analyzer is presently correlating facts, usually in support of an analyzer decision.
  • Hottest Node 236 points to the hottest-firing neuron in the context pool (short-term memory). The analyzer uses it for scaling decision thresholds.
  • Importance for Action 215 is the relative importance of making a decision. It is initially based on the propensity for importance of action, and can be scaled up or down by the analyzer as the result of recent decisions.
  • Need for Completeness 260 indicates the relative need for complete (and quality) facts, prior to making a decision. Incomplete facts will cause the Conjector to make suitable guesses, but the resulting ‘facts’ will be of lower quality.
  • Urgency for Action 216 represents the urgency (not the importance) of making a decision. Higher levels of urgency make lower quality of information (and decisions) acceptable.
  • Example Temperament-Based Propensity Parameters. A typical set of basic brain Parameters which indicate various propensities based upon temperament are given in Table 2, including representative contribution ratios (given as a percentage). This set of values is by no means complete and is given for the sake of description of the mechanisms of this disclosure. Other Temperament Parameters may be identified and included in this list, without altering the methods and claims of this patent.
  • The specific percentages given in Table 2 are representative and typical values used, but are subject to ‘tweaking’ to improve the accuracy of the psychological model. Other values may be used in the actual implementation. Further, the list is representative and is not complete, but serves to demonstrate the system of the present disclosure.
  • It has been observed (and incorporated into Table 2) that, generally, many of these parameters reflect traits shared primarily by two of the temperaments, with one of the two being greater. That same parameter may also be shared minimally by the remaining two temperaments.
    TABLE 2
    Examples of Temperament Parameters
    Parameter Choleric Melancholy Sanguine Phlegmatic
    210 Propensity for Amusement 10 35 35 20
    211 Propensity for Completeness 20 35 10 35
    212 Propensity for Determination 35 20 10 35
    213 Propensity for Enjoyment 10 25 40 25
    214 Propensity for Fun 10 20 55 15
    215 Propensity for Importance of Action 50 10 35 5
    216 Propensity for Urgency of Action 35 12 50 3
    217 Propensity for Patience 15 35 5 45
    218 Propensity for Rhythm Influence 10 15 60 15
    219 Propensity for Stability 10 25 5 60
    220 Propensity to Analyze 10 60 5 25
    221 Propensity to Care-Take 5 10 30 55
    222 Propensity to Decide Quickly 50 15 30 5
    223 Propensity to Follow a Plan 10 25 5 60
    224 Propensity to Plan 50 35 10 5
    225 Propensity to Procrastinate 5 15 30 50
    226 Propensity to Second-Guess 5 60 10 25
    227 Propensity for Stability of Action 10 25 5 60
    228 Propensity to Rest Hands on Hips or in Pockets 25 60 5 10
  • The system of the present disclosure presumes the use of a node that defines the desired underlying temperament, and additional nodes that define the desired percentages of the four temperaments. Table 2 is a chart of the selected typical tendencies for each of the temperaments, with each numeric value giving the approximate likelihood of the given trait to be demonstrated by the four temperaments, as a percentage.
  • The percentages given are by way of example, although they may approximate realistic values. The altering of these values by no means alters the means and methods of this patent, and they may be adjusted to better approximate temperament traits. The list is by no means complete and is given as a set of representative parameters for sake of example.
  • In many, but not all, cases, the overall impact of a temperament is given by the product of the temperament's percentage, as pre-selected to produce the desired personality, and the percentage of likelihood given for each propensity from Table 2. This is demonstrated in FIGS. 4 and 5. These may be augmented by additional variations due to the Gender 209 parameter, accounting for differences in response by male or female gender.
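  • A hedged sketch of that product rule (summing the per-temperament products across a blended personality is an assumption drawn from the surrounding description): using the Table 2 row for the Propensity to Decide Quickly 222, a personality pre-selected as 60% Choleric, 15% Melancholy, 20% Sanguine and 5% Phlegmatic yields a composite propensity of about 38.5%.

    # Table 2 likelihoods (percent) for the Propensity to Decide Quickly 222.
    DECIDE_QUICKLY = {"Choleric": 50, "Melancholy": 15, "Sanguine": 30, "Phlegmatic": 5}

    def composite_propensity(temperament_mix, table_row):
        """Weight each temperament's Table 2 likelihood by its selected percentage."""
        return sum(temperament_mix[t] / 100 * table_row[t] / 100 for t in table_row) * 100

    mix = {"Choleric": 60, "Melancholy": 15, "Sanguine": 20, "Phlegmatic": 5}
    print(round(composite_propensity(mix, DECIDE_QUICKLY), 1))   # 38.5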
  • Detail of Some Temperament-Based Propensity Parameters. The samplings of parameters in Table 2 are described below, by way of example of how such parameters are specified and applied. The described settings and applications of these parameters are necessarily subjective, and the relative weightings of these and all other parameters described in this document are approximate and exemplary. One skilled in the art will realize that they may be altered or adjusted without altering the means of the system of the present disclosure.
  • The Propensity for Amusement 210 is the tendency to be amused. The higher values lower the threshold of what is found to be amusing, triggering amusement sooner. The triggering of amusement may be reflected in the appropriate facial expressions, as provided for in the underlying brain model and skeletal mechanics, if any.
  • The Propensity for Completeness 211 is a measure of the personality's tendency to need complete facts before making a decision, and is based solely on temperament selection. It is naturally highest for the Melancholy and naturally lowest for the Sanguine or Choleric. While it is normally not altered, the underlying brain model (analyzer) can raise or lower this parameter based upon training or learning.
  • The Propensity for Determination 212 is the tendency for the brain emulation to be determined, and sets the baseline value for the sense of determination. Over time, it can be permanently altered by achievement (or failure to achieve) targets or goals.
  • The Propensity for Patience 217 is a measure of the overall tendency for patience. The level is normally high for a Phlegmatic and low for a Sanguine, but is also significantly affected by (long term) experience history. Growth in this trait parameter is very slow and is an iterative process. High levels of Patience 217 can suppress early termination of action, when faced with repeated failure to meet short- or long-term goals.
  • The Propensity for Fun 214 defines the tendency of the temperament to make decisions based on the sense of feel-good. It is temperament dependent, tends to be highest for the Sanguine, and heavily influences the impact of Rhythm Influence.
  • The Propensity for Importance of Action 215 is a measure of the temperament's tendency to find action important, whether or not all the facts needed for decision are available and with high confidence. It is naturally highest for the Choleric and naturally lowest for the Melancholy and Phlegmatic. While it is normally not altered, the underlying brain emulation can raise or lower this parameter based upon training or learning.
  • The Propensity for Urgency of Action 216 is a measure of the personality's tendency to find action important, at the expense of strong consideration or analysis of the facts. It is naturally highest for the Sanguine and naturally lowest for the Phlegmatic. While it is normally not altered, the underlying brain emulation can raise or lower this parameter based upon training or learning.
  • The Propensity for Rhythm Influence 218 is a temperament-dependent parameter, and may be altered up- or downward by hyperactivity. It controls the relative effect of rhythm on the decision process. Its baseline value is relatively higher for the Sanguine.
  • The Propensity for Stability 219 is a temperament-dependent parameter that defines the tendency towards stability. When the value is high, decisions will tend to be made that lead to no net change, in the sense of foot-dragging. It also implies a tendency to procrastinate, and is a strength (or weakness) of the Phlegmatic personality. High levels of Stability 219 lead to strong loyalty towards the context-dependent authority.
  • The Propensity to Analyze 220 is determined by temperament and is not affected by other properties, except by external command. Even then, its effect is short term and rapidly trends back to the base tendency. When very high, there is a marked tendency to analyze and correlate facts before making decisions, and the confidence-based decision thresholds based on the outcome are normally raised.
  • The Propensity to Care-Take 221 is a temperament-dependent parameter, tending highest in the Phlegmatic and Sanguine. It increases the interest in acquiring people-related facts for short-term memory. The impact of this parameter is established, for example, by altering the parameters of the Clutter Filter for the context pool or short term memory.
  • The Propensity to Decide 222 is a parameter that is highest for the Choleric and Sanguine temperaments, and influences (increases) the willingness to make decisions with a minimum of facts. For the Choleric, decisions subsequently proven inferior may be altered, while for the Sanguine, the results tend to be ignored. Parameter 222 also increases the tendency to revise decisions as higher-quality facts are available, and decreases the stability in decisions and the tendency to foot-drag.
  • The Propensity to Follow the Plan 223 is the (current) level of tendency to follow a plan. Its core value comes from personality traits, but is altered by such variables as stress, urgency, and external pressure. When pressure is high, as per Trauma parameter 230, there is increased tendency to ignore the plan and to revert to personality profile-based responses. This is accomplished in a manner such as demonstrated, for example, in FIG. 5.
  • The Propensity to Plan 224 is a measure of the tendency and desire to work out a plan prior to a project or task, and is a function of the temperament profile. If Propensity 224 is high, work on the task will be suspended until a plan of steps in the task is worked out. The propensity to plan does not imply a propensity to follow the plan, per 223.
  • The Propensity to Procrastinate 225 is a measure of the tendency to procrastinate, deferring decisions and action. The primary value derives from the temperament per Table 2, and is then a fixed parameter but which may be gradually altered by experience or training. While procrastination is largely a characteristic of the Phlegmatic, it also occurs in the Melancholy decision-making process, in the absence of complete facts, and is normally very low for the Choleric.
  • The Propensity to Second-Guess 226 is a measure of the tendency to reevaluate decisions, even quality decisions, and possibly to evaluate them yet again. Temperament-dependent as shown in Table 2, it is highest in the Melancholy and typically lowest in the Choleric.
  • The Propensity to Stability of Action 227 is a measure of the tendency to maintain the status quo. Largely a Phlegmatic trait, it influences (increases) the tendency to foot-drag, and is implemented by a decreased willingness to alter plans. It may be connected to the underlying brain emulation or model as a part of the clutter or interest filter at the input of the context pool, short term memory or analyzer, suppressing new plans or suggestions that abort existing or active plans.
  • Propensity to Rest Hands on Hips 228 is a largely Melancholy trait whose more positive values increase the tendency of any attached mechanical skeleton to find a resting place for its hands, primarily on the hips or in the pockets. This parameter provides a control value to the underlying brain emulation or model, which itself is responsible for the motor skill issues that carry out this tendency. That emulation or model actually determines whether or not this tendency is carried out.
  • Again, parameters in Table 2 are directly controlled by one or more of the four underlying temperament selection parameters. They are scaled by percentages such as those also given by example in Table 2. They are then distributed by the brain model to the appropriate control points, filters and selectors within the underlying brain emulation or model.
  • Inclusion of Parameter Influence. Throughout the brain emulation, there are many places at which a parameter may or may not influence the outcome of a decision. The likelihood of the parameter contributing to the decision is, in some cases, statistically based. One method of accomplishing this is shown in FIG. 3. A random number between 0 and 100% is generated by 421 and is compared by 422 against the parameter in question. If the parameter value exceeds the sum of a base threshold parameter 423 and the random number, inclusion is enabled.
  • This type of logic is frequently used in the clutter filter discussed elsewhere.
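  • A minimal sketch of the statistical inclusion test of FIG. 3, following the description above (the numerical values are arbitrary examples): a random number between 0 and 100% is drawn, and the parameter influences the decision only when its value exceeds the base threshold plus that random number.

    import random

    def include_influence(parameter_value, base_threshold):
        """Weighted random influence per FIG. 3: include the parameter when its
        value exceeds the base threshold plus a random number in 0-100%."""
        return parameter_value > base_threshold + random.uniform(0.0, 100.0)

    # A strong parameter (90%) over a low threshold (5%) is included in roughly
    # 85% of trials; a weak parameter would rarely pass the same test.
    hits = sum(include_influence(90.0, 5.0) for _ in range(1000))
    print(hits)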
  • Derived Brain Parameters. Many parameters derive from the basic Temperament Parameters of Table 2. These values may be a combination of temperament parameters, adjusted for learning, training, experience and present conditions. As with other brain nodes and parameters, most of these are expressed in a range of 0 . . . 100%, in units suitable to the technology of implementation.
  • A typical set of these derived parameters is given in Table 3. Each of these has an additional (signed) value to be added to it which is further adjusted on the basis of learning or training. The list is by no means complete, and is given for the sake of description of the mechanisms of this disclosure. Many of these relate to matters of emotion, its measure and expression. These, as may all parameters, may be monitored externally to measure the emotional state of the emulated brain.
    TABLE 3
    Examples of Derived Brain Parameters
    Decay Targets
    Derived Parameter Choleric Melancholy Sanguine Phlegmatic
    250 Base Decision Threshold 10 45 5 40
    251 Concentration Ability 10 60 5 25
    252 Docility 5 25 10 60
    253 Hyperactivity 25 10 60 5
    255 Filter Organizational Detail 5 25 10 60
    256 Filter People Interest 60 25 5 10
    258 Filter Relational Detail 10 60 5 25
    259 Filter Technical Detail 45 5 40 10
    260 Need for Completeness 10 40 5 45
    261 Patience With Detail 5 60 10 25
    262 Procrastination Level 5 25 10 60
  • These parameters may be derived from temperament, context, environmental and current-condition parameters, for example, although other means will become obvious during this discussion. The parameters of Table 3 are exemplary. Most parameters in this table decay over time to the values shown at the right. These decay targets are nominal and may be altered through preemptive training. They derive from temperament percentages in a similar manner to Table 2. The list is by no means exhaustive or complete, and others will also become obvious during this discussion.
  • The current derived parameter values are distributed to the appropriate control points, filters and selectors within the brain emulation or model. In some cases, they control decision or stability thresholds, or establish the statistical settings, such as per 42 of FIG. 3, for current-interest filters in the emulated brain, and to other such brain emulation functions. The composite impact of these temperament and temperament-derived parameters determines the composite personality of the emulated brain.
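  • A hedged sketch of the decay behavior described for Table 3 (the decay rate and step model are assumptions; the patent states only that the values drift back to their targets): a derived parameter raised by current conditions relaxes, step by step, toward its temperament-defined decay target.

    def decay_toward(value, target, rate=0.1):
        """Move a derived parameter one step toward its Table 3 decay target."""
        return value + rate * (target - value)

    # Hyperactivity 253, raised to 80%, drifts back toward a Choleric target of 25%.
    value = 80.0
    for _ in range(10):
        value = decay_toward(value, 25.0)
    print(round(value, 1))   # about 44.2 after ten steps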
  • The Base Decision Threshold parameter 250 is the starting basis for many decisions. It is the typical starting decision threshold, and is a measure of confidence or information completeness that must be obtained before a decision will be made. The threshold is given as a percentage, 0 . . . 100%, whose application depends upon the types of decisions being made. In some places it is used as an absolute threshold, or may specify a figure of confidence in the present facts, a figure that must be exceeded before a decision may be made.
  • The Concentration Ability parameter 251 is a measure of the ability to concentrate. A more positive value raises the threshold of attention to outside distractions, those unrelated to the issues in short term (or current context) memory in the underlying brain model or emulation. It is used by both the analyzer 30 and the clutter filter 40.
  • Docility 252 is a measure of the overall propensity for stability during external emotional pressure. It contains a long-term filter that decays back to the base value. Positive Docility 252 greatly increases the threshold of attention to emotional trigger events. Docility 252 can be altered over moderate periods of time, but tends to return to its temperament-defined static value. When this value falls lower than its average setting, there is an increasing tendency to ignore learned responses and to revert to personality profile-based responses.
  • Hyperactivity 253 is a measure of current levels of hyperactivity, as would be normally defined by someone skilled in the art. It is established by a programmable value and subsequently augmented by temperament percentages. Hyperactivity is also influenced by Docility 252 and current emotional stress. These sources are the primary determiners for the base value of hyperactivity, but long-term training or experience can alter the value. Choleric and Sanguine temperaments have relatively higher values, while Melancholy and Phlegmatic values are quite low.
  • The impact of Hyperactivity 253 is implemented, for example, by introduction of (typically negative) random variations in the magnitude of selected decision thresholds. It also alters the time constants of task-step performance and present rhythm parameters, with additional ultimate impact upon the performance of motor tasks.
  • Filter Organizational Detail 255 specifies the filtering of organizational detail from incoming information, context pool or short-term memory for the brain emulation. A value of 100% removes the greatest percentage of detail.
  • Filter Human Interest 256 specifies the filtering of human-interest data from the incoming information, context pool or short-term memory in the emulated brain. 100% removes most human-interest information. The value will be highest for Choleric models and lowest for Sanguine temperaments.
  • Filter Relational Detail 258 specifies the filtering of detail about inter-relationships between facts from the incoming information, context pool or short-term memory. 100% removes most detail. The value is highest for Phlegmatic and Sanguine models and lowest for the Melancholy models. Higher levels inhibit the correlation of distant facts that are nonetheless related. Lower levels also encourage the analyzer 30 to spawn events to event memory 14. This has the effect of iteratively revisiting the same information to analyze short-term memory for better correlation of data.
  • Filter Technical Detail 259 specifies the filtering of technical detail from the incoming information, context pool or short-term memory for the brain emulation. 100% removes most detail. The value is highest for Choleric and Sanguine models, and lowest for Melancholy models.
  • The Need for Completeness parameter 260 establishes the required level of completeness of information before making a decision. A higher value of completeness increases the likelihood of deferring a decision until all the facts are available, sometimes stymieing or stalling a decision. Other parameters related to importance and urgency can alter this parameter. The need for completeness can be altered by a decision of the analyzer 30, and upon external command to the brain emulation, such as through 93.
  • As the context pool (short-term memory) shrinks over time because of rest, the need 260 drifts backwards to the value set by the propensity for completeness. The need also reverts to the propensity value after a decision has been made. 100% implies the highest need for completeness. It is highest for Melancholy and lowest for Choleric and Sanguine models.
  • Patience With Detail 261 is the present level of patience. Its baseline value derives from the propensity for patience. It is affected by present conditions and can be commanded to rise. It largely alters decision thresholds, and values near 100% imply comfort with detail. The value is dynamic and tends highest for the Melancholy and lowest for Sanguine and Choleric.
  • Procrastination Level 262 is a measure of the present level of procrastination. Its base value is set by the propensity to procrastinate, is increased by uncertainty, and decreased by impatience. Procrastination defers decisions and postpones actions that are not otherwise inhibited by circumstances. Decision choices are implemented in a manner similar to 42 of FIG. 3. Higher values of this level postpone decisions, even in the presence of hard facts (high sense of certainty).
  • While procrastination is largely a characteristic of the Phlegmatic, it also occurs in the Melancholy decision-making process in the absence of complete facts. It is normally very low for the Choleric.
  • As noted, the parameters described in the preceding tables in no way constitute a complete set of obvious ones, which total in the hundreds. Selected parameters have been presented by way of illustrating the internal processes and considerations for the brain emulation of the present disclosure.
  • Implementation of the Brain Emulation. One implementation of the underlying functional model of the brain is diagrammed in FIG. 4. Three primary elements of the model are analyzer/correlator 30, the context pool memory 10, and the English semantic analyzer 50.
  • Throughout the descriptions, English is always used where the processing of external communications are involved, whether in complete sentences or in sentence fragments. Internally, the system is essentially language independent, except where linguistics, phonics, the spelling of words or the shape of letters used in the language are involved. For ease of initial implementation, English was used, but essentially identical processes can be applied to any human language of choice. The choice of language in no way limits the disclosure for purposes of this patent. Indeed, the methods of this patent can be applied to autonomously translate one human language to another.
  • Referring to FIG. 4, various elements are controlled or modified by the state parameters previously discussed. In particular, the Clutter Filter 40 plays a central role in determining what types of information are actually considered in the brain. As are most other blocks in the figure, operation of the analyzer/correlator 30 is controlled or heavily influenced by personality state parameters 22. These same parameters may themselves be the results of analyzer 30, in many cases.
  • The flow of external information enters through the semantic analyzer 50. This distills content and intent from both English sentences and sentence fragments, and formats the distillate for inclusion into short-term memory 10.
  • Concept of the Neuron Used Here. This disclosure makes no attempt to replicate the biological neuron, axon and dendron, their arrangement or interconnections, or their redundancy. Rather, the term neuron in this patent describes the means to remember a single fact or experience. As suggested bio-mimetically, the existence of a single fact is represented simplistically by a single neuron, while the implications of that fact are contained in the arrangement of interconnects between neurons.
  • In the biological neuron, there is an in-place ‘firing’ of a neuron when the associated fact is recognized. When, for example in a fox's brain, a specific neuron represents a common rabbit, the firing of a biological neuron implies recognition of that rabbit. The degree of firing (or output) represents the degree of certainty with which the rabbit is recognized.
  • There is no such equivalent in-place firing of the neuron in the emulation or brain model of this disclosure. In a digital implementation, the entire long-term memory 12 (where facts, relationships and experiences are stored) could be composed of read-only or slow flash memory, because recognition does not involve a change of the neuron's state in that memory.
  • As an alternative process used here, recognition takes place by the existence, recognition or correlation of data within the context pool memory 10. Any reference to a ‘firing neuron’ is to be construed as placement of a reference to (the address of) that neuron into context pool 10, along with a current firing level for it.
  • Neurons and Reference Indices. Every neuron records two types of information. The existence of a specific fact is implied by the fact that a neuron representing it was defined at all. Experiences are implied by the relationships and linkages formed between neurons. Individual neurons are emulated by some fixed-size base information, and a variable number of relational connection records, as shown in FIG. 9. Relational connections may be conditional, predicated upon the state of other neurons, and reference the ID indices of both their target neurons and condition triggers.
  • All neurons have a unique address, but it may be changed from time to time as memory is reorganized. Further, the very existence of some neurons is tentative. They may disappear unless reinforced over a period of time, and are located in the reinforcement memory 11. Because their precise locations are unstable, references of one neuron by another could be problematic. Further, the relative size of a neuron can vary widely, depending upon the inter-relationships and context with other neurons.
  • To handle these matters gracefully, a unique and unchanging index is allocated for each neuron created. References between neurons use this permanent index to inter-reference each other. If a neuron is deleted (in reinforcement memory 11), the index is reclaimed for later reuse. A specific bit within the index value indicates whether it refers to a normal permanent neuron or to the reinforcement memory 11. A fixed subset of the indices to the reinforcement memory ‘tentative’ neurons are also reserved and used to indicate information block type and format within the context pool 10.
  • Neurons in the reinforcement memory 11 that have been reinforced over a period of time are made permanent by the analyzer/correlator 30. The analyzer then moves them to permanent memory 12 and alters all references to its index to show that it has been so moved. References within that neuron may themselves not survive the reinforcement process, and may be deleted during the transfer. Refer to Table 4 for data stored with the individual neuron.
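  • A hedged sketch of the index scheme just described (the flag-bit position and index width are assumptions): one bit of the permanent index distinguishes references into reinforcement memory 11 from references into long-term memory 12, and promotion clears that bit when the analyzer moves a reinforced neuron to permanent memory.

    REINFORCEMENT_BIT = 1 << 31          # assumed position of the memory flag bit

    def make_index(serial, tentative):
        """Allocate a reference index, marking tentative (reinforcement) neurons."""
        return serial | (REINFORCEMENT_BIT if tentative else 0)

    def is_tentative(index):
        return bool(index & REINFORCEMENT_BIT)

    def promote(index):
        """Clear the flag when a reinforced neuron is moved to permanent memory;
        references are then updated to show that the neuron has moved."""
        return index & ~REINFORCEMENT_BIT

    nid = make_index(1234, tentative=True)
    print(is_tentative(nid), is_tentative(promote(nid)))   # True False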
  • Content of Neural Reference Structures. The analyzer/correlator 30 repeatedly scans context pool memory 10 for both unprocessed information and for activities suspended while awaiting occurrence of certain events or conditions. It also updates brain parameters both to keep them current and to check for relevant changes of substance.
  • Within the context pool, information is organized into variable-sized blocks, with all of it pre-classified or typed prior to submission. Some blocks contain inferred intent from sentences. Others contain commands, propositions, conjecture and other miscellaneous material. In its degenerate form, a ‘block’ may simply be a reference to a single neuron, and its firing level.
    TABLE 4
    Neuron Structural Content
    Neural Content Description
    Basic Information Basic information may include references to explicit spellings (e.g., a walk-back index to the text-tree for the word), pronunciation exceptions, visual-object descriptors and the like. Certain flags and start-indices for lexical matters and the like are also included here.
    Relational Linkages The weighted and conditional influence of this neuron upon another is defined by relational linkages, of which there may be up to 1000 or more, for some neurons. Each new experience and relationship learned has a relational linkage created for it. Initially, these relationships are created in the reinforcement memory, where they remain until later validated and moved to long-term memory (or are deleted). Relationals in reinforcement memory may refer to neurons in either memory, but those in long-term memory may refer only to other neurons in long-term memory. The Analyzer tracks the allocation, aging, validation, and ‘garbage-collection’ processes, and these are discussed in detail elsewhere.
  • Individual neurons are emulated by some fixed-size base information, and a variable number of relational connection records. The latter may be conditional, predicated upon the state of other neurons, and reference the ID indices of both their target and conditional neurons.
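  • By way of a hedged structural sketch (field names are illustrative, not the patent's): a neuron carries fixed-size base information plus a variable-length list of relational connection records, each naming a target neuron, a weight, and optionally a conditioning neuron whose state gates the link.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Relational:
        target_id: int                       # ID index of the neuron influenced
        weight: float                        # weighted influence of the link
        condition_id: Optional[int] = None   # neuron whose state gates the link

    @dataclass
    class Neuron:
        neuron_id: int                       # permanent reference index
        spelling: str                        # simplified stand-in for the text-tree walk-back
        flags: int = 0                       # e.g. a bit recording that a gamut list is present
        relationals: List[Relational] = field(default_factory=list)

    rabbit = Neuron(neuron_id=42, spelling="rabbit")
    rabbit.relationals.append(Relational(target_id=77, weight=80.0))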
  • Context Pool Memory 10. The core of all emulation occurs in the context pool (short term) memory 10 and the analyzer/correlator 30. All information of immediate awareness to the emulator resides in that context pool 10. Neuron-like firing is implied by the very existence within the context pool 10 of a reference to a neuron from long-term memory 12. Information (blocks) enter the context pool 10 serially, as it were, but are processed in parallel by the analyzer 30.
  • Referring to the context pool 10 in FIG. 4, data flows from right to left, as it were. Unless reinforced, all neuron data in the pool gradually ‘leaks away’ or dies away during its travel, aging it. Should the context pool fill, the oldest (or left-most) data is simply lost, a case of information overload. Any data remaining in the context pool 10 that has aged without reinforcement can eventually decay to a zero-firing state, at which point it is simply removed from the pool.
  • Data may be placed into the context pool 10 from a number of sources, the initial one of which is often the semantic analyzer 50. Except for inputs from the analyzer 30, all context pool information is filtered by a clutter filter 40, which largely keeps irrelevant or non-interesting data from reaching the context pool 10.
  • Data in the context pool take the form of block-like structures of predefined format. A block arriving from the semantic analyzer 50, for example, contains the intent of a sentence, independent clause or sentence fragment. A one-word reply to a question is fully meaningful as such a fragment. Such a sentence block may contain references to a speaker, the person spoken to, and possibly, references to the person or object discussed. Many combinations of this and other sentence data are possible.
  • Blocks from analyzer 50 frequently include the purpose of the sentence, such as query (and expected type of answer), command, factual declarations, observations and the like. This type of data is discrete and readily identifiable by the semantic parse.
  • Other implied emotional information may be inferred from use of superlatives, exclamatories, and tone (if derived from an auditory analyzer 60). Auditory sources yield the speaker's nominal fundamental frequency and infer stress or emotional excitement by short or long-term pitch deviations accompanying spoken speech.
  • The length of the context pool is determined empirically by the application, but is nominally sufficient to handle a number of hours of intense study, or approximately a day of casual interaction. To put sizes into context, this represents roughly a megabyte of conventional digital storage, although selected size does not alter the means or methods of this patent.
  • During sleep times (or emulated extended rest), the context pool 10 gradually drains, with neural firings gradually fading to zero. As neural references fade to zero, they are removed from the context pool, as suggested bio-mimetically.
  • New information may be introduced during sleep by the dreamer block 75. Dreamer-derived information created during deep sleep decays rapidly when awake, at rates different from normal context pool data decay. If the sleep time is insufficient, yet-active neural firings remain into the following wake cycle; these are handled as previously described.
  • Language Syntax Analyzer 50. A language semantic analyzer 50 accepts communications in the natural language of implementation, English, for example. It breaks down sentences, clauses, and phrases to derive intent and purpose from the sentence. It uses the context of the current conversation or interaction by polling the analyzer 30, long-term memory 12 and reinforcement memory 11. Access to present context is obtained indirectly from the context pool via analyzer 30. Interpretation of language words is weighted by the presence of their associated neurons in the context pool, yielding context-accurate interpretations.
  • While language semantic analyzer 50 could be hard-coded in logic, it is beneficial for many applications that it be implemented as an embedded processor. This method is not required for the purposes of this disclosure, but is a convenience for the parse and interpretation of languages other than the initial design language.
  • Because all humans are essentially the same regardless of their national language and its grammar or semantics, the parameters described herein remain constant, while the language description script of the language semantic analyzer 50 would change.
  • For convenience, statements emitted by analyzer 30 through interface 98 are created in analyzer 30. However, this function could be split out into a separate unit to simplify changing the language of choice from English.
  • For a given language, semantic analyzer 50 recognizes a set of words that are an essentially invariant part of the language, such as with and for, in English. These play a substantial role in defining the grammar for the language. Nouns, verbs and adjectives readily change with the ages, but the fundamental structural words that make up the underlying grammar rarely do.
  • In addition to these invariant ‘grammar’ words, the structure of sentences, clauses and phrases define the remainder of the grammar. Analyzer 50 uses this overall grammar to interpret the intent of the communications.
  • Computer languages (non-natural languages) are often parsed by separate lexical and grammar parsers, using such commercial tools as Lex and Yacc. These were deemed burdensome and unwieldy for parses within the system of the present disclosure. For natural languages, an alternative parser (Lingua, a commercial parser and not the subject of this disclosure) was created. Using Lingua, a highly complete description of English grammar was defined and serves as the basis for language semantic analyzer 50. The intellectual property contained therein is a definition of English grammar itself, although it is also not the subject of this disclosure.
  • In the prior art, custom analyzers using large corpuses or dictionaries of words have also been employed for the parsing of English text. Unlike them, semantic analyzer 50 makes use of context-dependent information for a more accurate rendering of intent from the text.
  • Semantic analyzer 50 takes in natural language sentences, clauses, phrases and words, and emits blocks of decoded neuron references and inferred intent. In large measure, the non-changing and fundamental grammar words are discarded after they have served their purpose in the parsing. Similarly, structural constructs within sentences are often discarded after their implications have been gleaned. Finally, pronoun references such as he and it are replaced by references to neurons representing the resolution targets, such as “David Hempstead” or “rabbit”.
  • The semantic analyzer indirectly references both long term 12 and the “21-day” reinforcement memory 11, and can extract relational information from either, to determine meaning and intent of specific words. It places greater weight on words whose neural references are already firing within the context pool 10.
  • The definitions of English (or other natural language) grammar are contained in a definition file in a variant of the Backus-Naur Form (BNF). Refer to FIG. 5 for an example fragment of such a definition. The example was implemented using the Lingua compiler, a commercial product of Neuric Technologies. An example of diagnostic results obtained from parsing the sentence, "The table failed." is given in FIG. 6, showing the iterative nature of the parser used in the commercial Lingua product.
  • It can readily be seen by one skilled in the art that the language analyzer 50 can be implemented variously without detracting from its placement and efficacy in the system of the present disclosure.
  • Sentence Blocks. For sentence processing, context pool 10 data may be blocked into inferred facts and data. Preprocessing in semantic analyzer 50 will have already converted sentence fragments into complete sentences, or will have flagged the fragments for expansion by the Conjector.
  • Each sentence block is usually a complete sentence, with subject and predicate. Implied you subjects have had the subject resolved and the appropriate neuron reference substituted. The implied It is prefix, which turns a noun clause (e.g., an answer to a question) into a full sentence, would also have been added as needed. All sentence blocks are standardized in form, with inferred sentence information reordered into that form.
  • The blocks are of variable length, with flags that indicate the sentence data being stored. Some of this information is gleaned from state parameters. The sentence type dictates which items are optional. Types include Declaration, Question, Exclamation, Observation, Accusation, Answer to Query, and yet others. Other sentence data may include the following (and other) information:
  • Subject
  • Subject Person: (1st, 2nd or 3rd)
  • Subject Count: (Singular, Plural)
  • Subject Gender: (Male, Female, Object)
  • Action or Step to Take
  • Verb
  • Object (including Person, Count, Gender)
  • Target of Action (including Person, Count, Gender)
  • All noun-like items also contain the person, count, and gender flags. These sentence blocks are interpreted by the analyzer/correlator 30 and the conjector 70 as commands for interpretation. Some of these are described in the discussion about Table 7 contents.
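  • One possible in-memory layout for such a sentence block is sketched below. The field names and enumerators are illustrative assumptions; only the items listed above (sentence type, subject, verb, object, target of action, and the person/count/gender flags) are taken from the description.

    // Hypothetical layout of a sentence block; not the disclosure's actual format.
    enum class Person { First, Second, Third };
    enum class Count  { Singular, Plural };
    enum class Gender { Male, Female, Object };
    enum class SentenceType { Declaration, Question, Exclamation, Observation,
                              Accusation, AnswerToQuery };

    struct NounRef {               // every noun-like item carries these flags
        int    neuron_id;
        Person person;
        Count  count;
        Gender gender;
    };

    struct SentenceBlock {         // variable length in practice; the sentence
        SentenceType type;         // type dictates which items are optional
        NounRef subject;
        int     verb_neuron_id;
        NounRef object;            // optional, including person/count/gender
        NounRef target_of_action;  // optional
    };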
  • The Sentence Recognition Process. Regardless of whether the sentence was obtained through written text or from auditory speech, recognition and understanding of sentence content is roughly the same. The greatest differences are the additional cross-checks, validations, and filters imposed on spoken speech. For extracting intent from sentences, a general communications triad is defined: The speaker, the person/object spoken to (e.g., the receiver of commands), and the person, object or subject spoken of. Most of this information can be inferred from sentence content, from the present context pool 10, and from state parameters 20 and 23.
  • The basic process is:
  • 1. Parse—Parse the sentence using language grammar rules, such as in FIG. 5.
  • 2. Extract the Triad Corners—Identify shifts in the communications triad, if any. For identified shifts, advise correlator 30 by suitable command notifier in the context pool 10.
  • 3. Extract any Qualifiers—Compile qualifier clauses. If a definitive sentence, store the compilation, but otherwise evaluate the clause's probability to a single neuron, extracting both neuron references and data sufficient to create additional relational connections 1252.
  • 4. Extract Structural Elements—Extract key structural elements, discarding semantic information. Store the data in appropriate blocks or neuron references for use by the correlators 30 and 75.
  • 5. Compile Definitives—Compile any definitive sentences into relational and qualifier constituents, storing the relational associations (if any) with the relevant fact neurons. This is done indirectly by submitting an appropriate directive to the context pool 10.
  • The above basic process is exemplary of a portion of the typical activity for parsing a sentence and generating information or command blocks for inclusion in the context pool 10.
  • Clutter Filter 40. Clutter filter 40 acts to limit entry of certain types of information into context pool 10. Information entering the context pool 10 must pass through the clutter filter 40, except for that emitted by analyzer 30. The purpose of the filter 40 is to remove extraneous neurons, such as language or grammatical tokens and non-significant gesture information. The clutter filter 40 follows preset heuristics which may either be fixed or adaptable.
  • The result of the filter is to maximize the consideration of relevant information and to minimize ‘mental clutter’ and things of little interest to the personality being modeled. Choleric temperaments, for example, do not thrive on human-interest information as the Sanguine does. Data so identified may be removed in keeping with current parameter conditions. This may occur during the course of conversational exchange, during which time semantic analyzer 50 or other sources flags the data on the basis of the topic of discussion.
  • The clutter filter is a substantial contributor to the emulation differences in right-brained and left-brained activity, second in this only to the work of analyzer/correlator 30.
  • During interaction with the outside world, a large number of neurons are referenced from memory and submitted to the context pool 10 for analysis, correlation, conjecture and dreaming. The filter considers the type and groupings of neurons being submitted, as well as some of the inhibitor factors, and may opt to discard them instead of forwarding them to the context pool 10. During normal (non-sleep) activity, outputs from the dreamer 75 are given very low priority, unless overall levels of neural firings in the context pool 10 are very low.
  • Neural phrase results from the analyzer 30 always enter short-term memory directly, bypassing the clutter filter 40. By its nature, analyzer/correlator 30 governs overall thought (and memory) processes and normally does not produce clutter.
  • The filter also prioritizes incoming information. For example, the entire content of an answer to a question is likely to be passed through, whereas the same material might not ordinarily be.
  • The primary basis of determination of what constitutes ‘clutter’ is the personality parameters 20, a subset of the state parameters 22. (In FIG. 4, they are shown separately from other parameters for emphasis and clarity, but are essentially the same.) Logic such as that of FIG. 3 demonstrates one means by which the clutter determination may be made. It will be obvious to one skilled in the art that the clutter filter 40 as described here can be augmented with additional rules and heuristics without altering the basic disclosures of this patent.
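  • By way of illustration only, one such heuristic might be coded as below. The tags, thresholds and the human_interest_bias field are invented for the example and are not part of the disclosure; they merely show how personality parameters 20 can drive the clutter decision.

    // Hypothetical clutter test driven by personality parameters.
    struct PersonalityParams {
        double human_interest_bias;   // e.g. low for a Choleric, high for a Sanguine
    };

    enum class Tag { GrammaticalToken, Gesture, HumanInterest, Factual };

    bool is_clutter(Tag tag, double firing, const PersonalityParams& p) {
        switch (tag) {
            case Tag::GrammaticalToken: return true;          // always filtered out
            case Tag::Gesture:          return firing < 0.2;  // non-significant gestures
            case Tag::HumanInterest:    return p.human_interest_bias < 0.3;
            default:                    return false;         // pass everything else
        }
    }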
  • Analyzer/Correlator 30. The analyzer/correlator 30 is the heart of the emulated brain, and is the primary center of activity for thought processes. It is also the primary means for updating all dynamic brain parameters and is the only means for initiating permanent storage of information.
  • Decisions are normally based upon ‘solid’ facts, information of high confidence or firings. Generally speaking, higher perceived quality of the source information yields higher quality decisions. In the absence of good information, analyzer 30 uses information from conjector 70, although results using the latter are also of lower quality.
  • Thought and decision processes are performed by the analyzer block, with supporting prompts and suggestions from conjector 70 and dreamer 75 blocks. The heart of the analyzer's work is done in context pool memory 10, such that all processes are performed within the context of the moment.
  • Entry of a neuron reference into the context pool memory 10 initiates a sequence of events unique to the neuron and its associated relational (experiential) linkages, or ‘relationals’. Detailed later, these often make use of the event queue memory 14 to handle the implications of their connections.
  • Initial Activity Upon Awakening. When awakened in the morning, the rested mind (that is, the context pool 10) is usually quite empty. Thoughts and cares of the past day are gone, or are so diminished as to not be readily recalled. Fragments of sentences, fleeting observations and incomplete or illogical ideas of the previous day have been purged, the mind uncluttered. This is the context upon awakening.
  • Daily activity in this brain emulation begins in a similar way. The initial tendency is to resort to routine, established lists of actions, usually by the timed fulfillment of events from the event queue 14. Activity can also be started by other external means in both human life and in this brain emulation. Table 5 lists some example ways that activity begins in the morning, but the list is of course by no means inclusive:
    TABLE 5
    Example Start-of-Day Activity Indicators

    Event                            Activity Initiated
    Hungry for Breakfast             For the human, some form of routine that is normally undertaken, even if only the process of waking up, getting dressed and eating breakfast. Such a simple process is still a learned list, equivalent to one stored in the task list memory 13, though it also may not be consciously present in the mind. If nothing else occurs during the initial state of fogginess, the physical body soon makes known its need for food, and that initiates a tentative routine. If the emulated brain is connected to a robotic skeleton or vehicle, an equivalent for hunger might be depletion of fuel or electrical charge.
    Conversation or Telephone Call   Sometimes the day is begun by someone else who interrupts the sleep with a request for attention, asking a question. This is equivalent to wake-up via external communications 93, or through speech or visual analyzer 60. The sequence initiated by the conversation is a part of the thought processes. The sentence may be a command, a question or an observation.
    Uncompleted List                 Lists of things to be done at the close of the previous day are not always purged by sleep. They remain part of the active context 10 of the brain. Carried into the next day with reduced clarity or importance, they are a basis for the first thoughts of the day. Timed or conditional items emitted to the event queue 14 may also be waiting.
  • Any of the above conditions places into the context pool 10 blocks of neuron references that take the form of sentences, event-based commands and other information to be processed. One skilled in the art will recognize that the analyzer/correlator 30 can be implemented as hard-coded logic, a form of command interpreter, or as an embedded processor without altering the means of this disclosure.
  • Outcomes of Analyzer/Correlator 30 Activity. As a consequence of its operation, analyzer/correlator 30 may perform any of the activities of Table 6. The list is indicative of the types of outcomes and is not all-inclusive, but may be extended for the convenience of implementation. One skilled in the art shall realize that this does not alter the means of this patent.
    TABLE 6
    Outcomes of Analyzer Activity

    Action                          Description
    Fire a Neural Reference         In context pool 10, initiate (or increase) the firing of a neuron for each new reference to it. Multiple references in the context pool 10 to the same neuron thus increase its influence.
    Reinforce Neural ‘Keep’ Count   Neurons in the reinforcement memory 11 that have been freshly referenced are reinforced. Their time-weighted reference (‘keep’) count is maintained with the neuron in memory 11.
    Decay 21-day References         Periodically (e.g., during sleep intervals) decay the ‘keep’ count for all neurons in the reinforcement memory 11, to enforce the need for reinforcement of learned information.
    Create a Permanent Neuron       Neurons in reinforcement memory 11 that have satisfied their reference count level are made permanent by moving them to long-term memory 12, updating their references, and removing them from short term memory.
    Initiate an Event               Certain conditions, particularly due to neuron relationals, and some types of sentences, cause events to be queued to the event memory 14. The queuing is normally for execution after a specified delay, awaiting the meeting of the conditions pending.
    Ask a Question                  Based upon need for more information, ask a question, formatting and emitting it through interface 98.
    Perform I/O or Motor Skills     Initiate appropriate motor skill lists or handle computer-like I/O related to the application.
    Update a State Parameter        Update relevant state parameters 22 based upon changes in internal conditions created by analyzer 30.
    Trigger Other Neural Blocks     Initiate action in other blocks such as the task list memory 13, to initiate motor-skill activity or to perform memorized steps.
    Decayed-Neuron Removal          When the firing value for a neural reference in context pool 10 has decayed to zero, remove the reference from the context pool.
    Neural Reference Aging          Periodically throughout the active day, neural references in context pool 10 are aged, reducing their influence. This aging is accelerated during periods of sleep.
    Conjecture Clutter Removal      Commands or references created by the conjector 70 are correlated for relevance, and discarded for low relevance to the target subject(s).
    Dream Clutter Removal           While awake, information and command fragments from dreamer 75 are rapidly decayed. During sleep periods, perceived accuracy of these items is increased and they are treated as ordinary and factual information, but motor-skill related commands are suppressed.
    Expand Fragment                 Command the conjector 70 to expand a sentence fragment into the closest equivalent full sentence.
  • Besides the items of Table 6, analyzer/correlator 30 maintains and updates numerous lists, such as present subjects of conversation or inquiry, the status of pending answers to questions issued, maintenance and completion status of motor skill activity, and the like. Its primary source of information and commands comes from the present contents of the context pool 10.
  • Context Pool Commands. Within context pool 10, information and facts are stored in a generic form as neuron references, i.e., neural indices. Both state parameters 22 and context pool commands are encoded as dedicated lower values of neural indices. The commands are variable in length, with their index followed by length and supporting information.
  • Many synthesized commands derive from the parsing of sentences by language analyzer 50. Sentences may be distilled into multiple commands, each complete with neural references. Implied subjects, verbs or objects are resolved with references to relevant neurons. For sentences with multiple subjects, verbs or objects, the sentence content is replicated, with one copy per item in the subject list, for example.
  • Some commands found in context pool 10 are given in Table 7. The list is exemplary and not exhaustive. One skilled in the art will realize that the list may be extended without altering the means of the system of the present disclosure.
    TABLE 7
    Example of Context Pool Commands

    Command                           Remarks
    Initiate Motor Skill              From a command or a list item
    Await Completion                  Suspend topic activity, awaiting completion.
    Await Factual Answer              Question was asked that expects factual information.
    Await Affirmative Answer          Question was asked that expects a yes/no answer.
    Seek Information                  Ask a question to resolve ambiguity or missing information.
    Correlate Answer                  Process anticipated answer
    Initiate Definition               From definitive sentence
    Execute Command                   From imperative sentence
    Repeat Until Condition            Perform an iterative operation or analysis.
    Note Declarative                  Handle declarative sentence or observation, setting relevant expectations.
    Note Exclamatory                  Handle exclamatory sentence, updating relevant emotional states.
    Update/Add Topic                  Refresh list of topics and update relevance of the list items.
    Update the Communications Triad   Update the list(s) of who is speaking (speaker), who is being spoken to (target) and the object(s) of conversation.
    Note Accusation                   Handle accusatory statements, updating emotional state and emitting conditional events to queue 14 to prep answers to implied questions.
    231 Declarative                   Command to handle state of being, remarks or commentary
    232 Imperative                    Command to self to do something
    233 Definitive                    Command to define something
    234 Interrogative                 Command to respond to a question
  • For convenience, all data structures in the context pool 10 look like neuron references.
  • Execution commands are always flagged by their source, such as a speech or grammar analyzer, the Analyzer or Correlator 30, the Conjector 70, Dreamer 75 and so on. The Analyzer 30 later considers the source when applying the command during its thought or decision processes. Exemplary commands from semantic analyzer 50 are given below, these particular ones being based upon sentence types.
  • Declarative 231 is an instruction to consider a present condition about the subject. It may also be a part of an experience process, ultimately culminating in the creation of neuron-to-neuron or neuron-to-state-parameter relationships. This command is usually created by the parsing of a sentence, but can also be created by thought processes within analyzer 30.
  • Declaratives may result in a remembered relationship, in time and with reaffirmation, and through conjector 70's action. That is, declaratives are ‘taken with a grain of salt’, weighted by confidence in the source of the observation. They differ from the definitive 233 in that the latter is already presumed to be a source of facts, and only the reliability of (confidence in) the information needs to be confirmed before remembering it.
  • For example, “Four cats are sufficient to eliminate mice from large barns,” is a declarative that proposes how many cats it takes to get the job done. Before analyzer 30 assumes the statement to be factual and remembers it, it will consider its confidence in the source of the remark, and whether or not the information is reaffirmed.
  • Imperative 232 instructs the brain emulation, via analyzer 30, to do something, such as to consider a proposal, pay attention, recall something, or to conjecture an answer to an issue with insufficient information. It is a command for action of some type, directed towards the brain emulation.
  • A command such as ‘Come here!’ must be evaluated in the present context. It implies activation of a motor-skill list to begin physical motion, and targets the location of the speaker. The latter may not be in the context pool 10, but is maintained in a state parameter 22. In this case, analyzer 30 directs the motor skill via task list 13. It can then, for example, issue an await-on-completion event 142 and dismiss the command from memory. It will later receive a completion message (or a notation that it encountered a brick wall or other impediment to carrying out the instruction), closing the command.
  • Definitive 233 indicates definition of a fact (in reinforcement memory 11), and may include auxiliary conditional relational information. Example, “A cat is an animal with four paws, of which the front two are commonly called forepaws,” is a compound statement. The statements share a common subject, and have separate definitive 233 (“A cat is an animal with four paws”) and declarative 231 (“The front cat paws are commonly called forepaws”) clauses. Semantic analyzer 50 separates the compound into separate commands for each clause.
  • The definitive 233 portion, "A cat is an animal with four paws," defines these neurons if they are not already known: Cat, Animal and Paws. Even if the meanings of Animal or Paws are unknown, they can still be remembered, and the suitable relationals later formed between them. These are all recorded in reinforcement memory 11, if not already there and not known in long-term memory 12.
  • If already in reinforcement memory 11, their existence is reaffirmed to encourage possible permanent recollection. If the veracity of the speaker is high, less time is required to reinforce the facts. If the system is in preemptive training mode, these are assumed to be pristine facts, perhaps from God, and are immediately and permanently remembered.
  • The declarative 231 portion, “The front (cat) paws are commonly called forepaws,” also forms a definition, but must be reaffirmed to a greater degree than for the definitive clause. (Because parsing has already been performed, the explicit subject defined at the start of the sentence has already been associated with the trailing clause, too, by semantic analyzer 50.)
  • Because ‘The’ is present, the clause is declarative 231 rather than definitive 233. This is because the reference is to a specific cat, rather than to the generic cat animal. One skilled in the art is aware of these subtleties of English grammar, and how that grammar may be used to determine the intention and type of sentence.
  • Interrogative 234 poses questions and requests. These are normally injected into context pool 10 by the grammar semantic parser 50, but may also be queries from other sources. Many (but not all) questions are simply a declarative statement with a question indicated, and are often formed by a restructuring of a simple declarative sentence.
  • The parser 50 sorts questions into those seeking affirmation (yes/no) or seeking specific information, and presents them to the context memory 10 as declaratives 231 marked for validation or as imperatives 232 demanding an informative response. In either case, analyzer 30 only sees data constructs of the latter forms, marked as questions so that it can form its response.
  • Other internal commands are also added for sake of convenience, analyzer 30 loosely taking on the form of a von Neumann processor, with the ‘program’ being the command stream from the English parser, or from other blocks.
  • In communicating with brain emulators that share common memory 12, their analyzer 30 can forward ‘digested’ command blocks directly to the context pool of this emulator. If communicating with the outside world via external interface 98, analyzer 30 reformats the command block into an English sentence for parsing there, and receives English back via interface 93.
  • Neurons and the Context Pool 10. Conditionals expect a specific neuron (or combination of neurons) to be fired. State parameters 20 and 22 are pseudo-neurons, and preexist all allocated neurons. They are treated as neurons, and are assigned the lowest index ID numbers, but have no relational (experiential) links created for them. The ID of every firing neuron (except for state parameters 20 and 22), along with some information specific to the neuron, is maintained in the context pool 10, including the degree of firing.
  • Aged neurons in context pool 10 that are no longer firing are eliminated from the pool memory, usually while ‘sleeping’. Neurons that are firing but are not being reaffirmed or re-fired in the context pool 10 have no effect, other than to establish the context of the moment. For example, they may be the subject of a conditional test, or may alter the contextual meaning of a sentence being parsed.
  • Unidirectional Relationals. Where relationships are unidirectional, a relational attached to the ‘causing’ neuron issues an event, but only if the specified condition is true. For unidirectional relationships, A implies B, but B does not imply A. In either case, the relationships may be conditional, predicated on other neurons also firing. Referring to FIG. 10, a relational link 1253 is created within the neuron impacted by the relationship.
  • Bidirectional Relationals. Where relationships are bidirectional, neurons or state parameters at both ends of the relational will issue events. If any conditions specified are not met, no event is fired off. For bidirectional relationships, A implies B, and B implies A. In either case, the relationships may be conditional, predicated on other neurons also firing. Referring to FIG. 10, a relational link 1253 is created within both neurons in the relationship, each referring to the other.
  • Relationals that Emit Events. When a neuron initially fires (or is reaffirmed), analyzer 30 scans its list of attached relationals. They are organized as AND-connected lists optionally separated by OR markers. Consecutive relationals are evaluated until one of them fails or until an OR marker is encountered. If a relational fails, subsequent relationals are ignored to the next OR mark or end of the list.
  • On failure, encountering an OR marker resets the failure condition, the OR is ignored, and testing resumes at the relational just beyond the OR.
  • If the end-of-list is found first after a failure, no event is generated. Finding an OR (or finding an end-of-list, with all previous tests successful) implies that all AND-connected relational conditions were met, so an event is created. Conditional relationals may be flagged with a NOT, implying that the converse of the condition must be true for the relational to succeed.
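  • The scan just described can be expressed compactly in code. The sketch below is an assumed rendering: OR_MARKER, the Relational fields and the is_firing callback are placeholders, but the AND/OR/NOT behavior follows the rules given above.

    // Walk a neuron's AND-connected relational list, honoring OR markers and
    // NOT flags; returns true when an event should be generated.
    #include <functional>
    #include <vector>

    const int OR_MARKER = -1;            // reserved target ID acting as the OR separator

    struct Relational { int target_id; bool negate; };

    bool relationals_satisfied(const std::vector<Relational>& list,
                               const std::function<bool(int)>& is_firing) {
        bool group_ok = true;            // state of the current AND-connected group
        for (const Relational& r : list) {
            if (r.target_id == OR_MARKER) {
                if (group_ok) return true;   // a whole AND group succeeded
                group_ok = true;             // OR resets the failure condition
                continue;
            }
            if (!group_ok) continue;         // skip ahead to the next OR after a failure
            bool met = is_firing(r.target_id);
            if (r.negate) met = !met;        // NOT: converse of the condition must hold
            if (!met) group_ok = false;
        }
        return group_ok;                     // end-of-list with all previous tests passed
    }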
  • Other Internal Lists. Analyzer/correlator 30 maintains other lists of information in short-term memory similar to that of the state parameters 22, which are also treated as blocks of predefined neurons. These have been discussed elsewhere within this patent and include lists such as the following:
  • Topics of Discussion
  • Motor Activities in Process
  • Events whose completion is being awaited
  • Multiple objects to apply sentence to
  • Multiple verbs applying to the sentence
  • One skilled in the art will recognize that the above list is by no means inclusive, and the logical or physical placement of the above lists may be altered, or the list added to, without changing the methods of this patent.
  • Walking the Neural Connection. When a new command is added to the context pool 10, it usually contains a reference to a neuron that represents a fact or condition of existence. Usually it will reference more than one. Each such reference either brings the neuron ‘into the pool’, or reaffirms neurons already in the context pool 10.
  • Simply referencing a neuron causes analyzer 30 to bring it into the context pool, even if not firing very strongly. Some command blocks, such as from a definitive clause, greatly increase the level of firing. Multiple references to the same neuron over a relatively short duration also increase the firing level, up to the 100% level.
  • Recognition of a person's face, for example, brings the ID of that person into the context pool 10, firing the relevant neuron in accordance with the degree of confidence in the recognition. (e.g., “That might be Jackie, over there.”) Shortly thereafter, hearing the same person's voice increases the confidence of the identification. The firing of that person's neuron (ID) may therefore increase from perhaps 65% to 95%. Ongoing interaction with that person keeps his ID alive in the context pool 10.
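  • One simple way to model this accumulation, assuming firing levels in the range 0.0 to 1.0, is sketched below. The particular combination rule is an assumption; any rule that raises the level toward, but never beyond, 100% would serve.

    // Raise a firing level when fresh evidence re-references the same neuron.
    double reinforce(double current, double new_evidence) {
        // e.g. 0.65 (face match) reinforced by 0.85 (voice match) gives roughly 0.95
        double combined = current + (1.0 - current) * new_evidence;
        return combined > 1.0 ? 1.0 : combined;
    }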
  • Correlation of Relational Information. When in-pool neurons fire, other neurons may be implied by known relationships. For example, Green and Animal might imply a parrot if either Cage or South America is presently in the context pool 10. Otherwise, if Swamp is firing, Alligator may fire. Analyzer/correlator 30 gathers triggered references into context pool 10, updating neuron firings in a manner specified by the scaled connection weight.
  • For the case of such relationally-initiated firings, firing level is controlled by the values of the referencing neurons (e.g., Green, Animal or Swamp), and the weight given in the relational connections. That is, the Alligator neuron will fire weakly if Florida (which might imply Swamp) is firing weakly, although nothing else directly activated Swamp. Analyzer 30 effectively acts as a correlator by walking through the connections of all firing neurons, awakening other neurons as long as firings are not suppressed by conditional relationships.
  • Referring to FIG. 7, if Dog 121 and Excitement 122 are both firing (e.g., information inferred from a parsed sentence), references to them are placed into context pool 10. The relationships of FIG. 7 would set expectations for a dog to bark via neuron 123. Weights 124, which may differ from each other, are multiplied by the firing levels of 121 and 122, respectively. If the resultant firings both exceed some minimum decision threshold, the AND operation 125 causes the generic Dog-Bark neuron 123 to fire. A reference to neuron 123 would then be inserted in the context pool 10, possibly initiating a motor skill event to cause a bark, for example. It should be obvious to one skilled in the art that many variations of FIG. 7 are possible without altering the means of this disclosure.
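  • The FIG. 7 example can be rendered directly in code, as below. The specific firing levels, weights and the 0.5 decision threshold are placeholder values chosen for illustration, not values taken from the disclosure.

    // Weighted AND of Dog 121 and Excitement 122 driving the Dog-Bark neuron 123.
    #include <cstdio>

    int main() {
        double dog = 0.9, excitement = 0.9;        // firing levels of neurons 121, 122
        double w_dog = 0.8, w_excitement = 0.6;    // weights 124, which may differ
        const double threshold = 0.5;              // minimum decision threshold

        bool dog_ok = dog * w_dog > threshold;
        bool exc_ok = excitement * w_excitement > threshold;
        if (dog_ok && exc_ok)                      // AND operation 125
            std::printf("Dog-Bark neuron 123 fires; reference placed in the context pool\n");
        return 0;
    }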
  • Again, the analyzer 30 causes any neuron not reaffirmed or re-fired over time to gradually decrease its firing level. That neuron is then ejected from the context pool 10 if it goes to zero. It is also dumped from memory if it is still firing but has been there a long time and the context pool 10 is full.
  • The Long-Term 12 and Reinforcement Memories 11. Reinforcement memory is a way-point in the process of learning and remembering things. All new information and relationships are established in reinforcement memory 11, and it serves as a filter for items important enough for later recall. Analyzer 30 handles this process.
  • The reinforcement memory 11 is a means of eliminating non-essential facts, relationships and incidents otherwise uselessly cluttering permanent memory. The ultimate growth of long-term memory 12 is then moderated, keeping the mental processes and memory more efficient.
  • Much of the information and experience we encounter is incidental and not worth recollection. For example, paper blowing in the wind is recognized for what it is, but the incident is too insignificant to remember, unless perhaps the context is the distribution of propaganda leaflets. The latter might be worthwhile musing over. Reinforcement memory 11 is the interim repository for this information, while its worth is reaffirmed or forgotten. Analyzer 30 permanently moves validated facts and relationships to long-term memory 12, as discussed elsewhere.
  • The long-term memory 12 and the reinforcement memory 11 share a more or less common format. Allocation of neurons and relationals are handled entirely by analyzer 30, and policies that govern permanent retention reside there.
  • Information is validated by analyzer 30 as ‘memorable’ when it was repeatedly referenced over a 21-day period, or repeatedly during exercise of strong emotion or trauma. So validated, the analyzer 30 moves it to long-term memory 12. Referring to FIG. 8, associated relationals are also moved from reinforcement memory 11 to the long-term side. Both memories consist of the following items:
  • An ID Table 126
  • A Table of Neurons 125; and
  • Other emulator-specific tables
  • “Other” tables include specialty tables associated with a single neuron and used for recall of motor-skill task lists, aural or visual artifacts or objects, and the like. Their format is specific to the emulator type (e.g., visual, speech or motor-skill) that produces them, but they follow the standard processing and correlation rules for ordinary neurons.
  • No neuron is special of itself. Rather, it takes meaning and worth from position and interconnection with other neurons. For example, a Laptop neuron is meaningless of itself (except for spelling, pronunciation and visual shape), but has importance because of its relationships to Computer, Portable, and Convenient.
  • Handling of Idioms for any Natural Language: One skilled in the art is familiar with the various methods of parsing of natural language sentences, and many tools or methods are available to do that. For the convenience of description, such a system is assumed to exist for ‘parsing’, the breaking of sentences into their constituent parts. Even though the English language is used here by way of example, one skilled in the art will immediately realize that the same general techniques of parsing and the handling of idioms as described here can also be applied to almost any other human language.
  • It is a generally accepted technique to have a (perhaps proprietary) description of English, frequently one of a top-down nature that first describes sentences, then their subject, predicate, object and indirect object, and so on. For example, one could describe a sentence using a formalized grammar in the following manner:
  • Sentence=Subject Predicate
  • Subject=Noun or Noun_Equivalent
  • Predicate=Transitive_Verb Direct_Object, or, Intransitive_Verb Indirect_Object, or, Intransitive_Verb.
  • Of course, this system permits the creator of a parser to “drill down” to any desired level of detail, and can be extended as desired to accommodate all parts of speech. It is generally accepted that such formalized description of a grammar is then automatically translated into some computer language. Example commercial tools that do this include Lingua, Yacc and Lex. Such tools then create computer code (for example, in C++) for a program that parses a sentence.
  • It is assumed that one neuron is preferably allocated to each word in English, at least for words presently recognized by the underlying parser. Such a neuron is merely a place-holder and has no intrinsic value of itself, but its importance is its connection to other such neurons. The system of interconnected relationships between the neurons is a fundamental part of what constitutes memory for the artificial human.
  • For example, the word muddy can be construed as any of, "to contain mud", "to be mud-covered", "mud-like", "unclear" or other concepts. In actuality, it is not necessary to have multiple neurons for "mud", "muddy" or "muddily". Rather, it is sufficient to have only one neuron, for mud, and to create relationships/associations between neurons for concepts such as muddy boots. In this case, a conditional link could be created between the mud and boot neurons. In that manner, simple reference to the term boot can evoke the idea of mud. Conditional relationships can also be created between boot and rain, boot and snow, and so on.
  • Idioms can be handled in a similar manner. A single neuron can be assigned to represent a multi-word idiom, and associations can be then drawn between that “idiom” neuron and other neurons whose meaning (and relationships) are already established. For example, off the wall can be stored as a single neuron (with the text, “off the wall” attached to it), and that neuron can then be conditionally linked to strange, unusual and offbeat neurons.
  • One skilled in the art will realize that such associations can be unidirectional or bi-directional, and may be conditional or unconditional. For example, mud can be associated with boot such that reference to boot implies mud, but such that mud need not imply boot. As described herein, multiple associations may be drawn between a neuron and one or many other neurons. Each specific association is uni- or bi-directional and is predicated on the firing of some other neuron, or not.
  • Parsing: Traditionally idioms can be parsed as a sequence of individual constituents. Ascertaining their meaning from their individual constituents is extremely difficult, however, if not impossible. True idioms show three qualities, all causing difficulty with parsing:
      • Non-compositionality: The meaning of an idiom is not a straightforward composition of the meaning of its individual constituents. Under the weather has nothing to do with something being located under weather.
      • Non-substitutability: One cannot substitute a word in an idiom with a related word. Below the weather is not the same as under the weather, although both under and beneath are synonyms.
      • Non-modifiability: One cannot modify an idiom or apply syntactic transformations such as Luke is under the bad weather, or the weather Luke is under. These have nothing to do with being sick or not feeling well.
  • The formal language description, and the parsers created from it, normally deal with a single word at a time. They infer a word's usage both from context within the sentence and from the word's classification, such as may be maintained for that word in the neuron (or elsewhere), as appropriate.
  • A scanner device is required that can search for either an explicit sequence of words (e.g., "off the wall"), or—failing that—for a single word. If such a sequence is indeed found, such as might be associated with an ‘idiom’ neuron, that neuron is noted and may subsequently be fired, turned on. If only a single word of the sequence is matched against a neuron (or an item's text sequence), e.g., "off", then the neuron for the word off is identified instead.
  • By performing this two-part lookup, the example word off can be readily classified as a single word, or as a multi-word idiom, off the wall. Either such neuron contains associations with other neurons, and those associations give the final meaning to the word or idiom.
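  • A possible form of that two-part lookup is sketched below, assuming a longest-match-first search over a word list; the map-based storage and the function names are illustrative assumptions only.

    // Try to match a multi-word idiom at the current position first, then fall
    // back to the single word; returns the matched neuron ID, or -1 if unknown.
    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    using NeuronIndex = std::map<std::string, int>;   // text -> neuron ID

    int lookup(const std::vector<std::string>& words, std::size_t& pos,
               const NeuronIndex& idioms, const NeuronIndex& singles) {
        for (std::size_t len = words.size() - pos; len > 1; --len) {
            std::string phrase;
            for (std::size_t i = 0; i < len; ++i)
                phrase += (i ? " " : "") + words[pos + i];
            auto it = idioms.find(phrase);            // e.g. "off the wall"
            if (it != idioms.end()) { pos += len; return it->second; }
        }
        auto it = singles.find(words[pos]);           // e.g. just "off"
        ++pos;
        return it != singles.end() ? it->second : -1;
    }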
  • The relevance of the distinction in the above noted lookup process is that the entire phrase that constitutes an idiom can be treated in precisely the same manner as a single word, both in the parser and in the associated Al logic. That is, the implications of an idiom phrase are handled in exactly the same manner as implications of a single word's ‘neuron’. The context where the idiom (or word) is found determines which other connected neurons also get fired.
  • For example, the neuron for under the weather is likely to be connected to (and fire) a neuron that implies ‘ill health’ or ‘not feeling good’, or to reduce the firing of a ‘sense-of-health’ neuron. Obviously, one skilled in the art will realize that the other neurons to be fired by the recognition of an idiom are determined both by the context within the sentence and by the present connections (knowledge) of the neural network in which it is implemented.
  • Using the above techniques, it is not necessary to individually parse the constituent words of an idiom and then attempt to establish an interpretation of meaning. Rather, the matching of the idiom's sequence of words establishes the neuron, context and associations that give the idiom its cultural meaning. Idioms have long posed great difficulty for parsing because of the above three qualities. With this method, they no longer cause this problem. All of this has been implemented at the parsing and neuron-connection level, without extraordinary care or logic being necessary.
  • The following sections discuss one specific implementation of emulator structure. One skilled in the art will realize that the technology of implementation is secondary to the means described herein. Many of these items will be tweaked or implemented variously as the underlying technology of implementations varies, such as software emulation, FPGA, gate array, embedded processor, analog relational arrays or optical logic.
  • The ID Table. Referring to FIG. 8, every neuron is assigned a serial number 127, something of no significance in itself. Each relational connection to another neuron uses that unchanging serial number as an ID. From the ID, spelling, pronunciation and other relevant information is obtained.
  • When memory is implemented as digital memory, the ID table 126 is located preferably at the base of that memory and consumes a predetermined and finite logical space. It is sized to have one element for every possible neuron. In reality, memory can be resized as more is made physically available, with suitable offsets applied to the resolution value for each ID in the table 126. For each index 127, the corresponding offset into the ID table 126 contains a neuron's address in the neuron table 125.
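  • In code, the indirection might look like the following sketch; the container type and function names are assumptions, but the point, that relationals store only the unchanging serial number 127 and resolve it through table 126, is as described above.

    // Resolve a neuron's serial number to its current record in the neuron table.
    #include <vector>

    struct NeuronRecord;                          // fixed block 1251 plus relationals 1252

    using IdTable = std::vector<NeuronRecord*>;   // indexed by serial number 127

    NeuronRecord* resolve(const IdTable& id_table, int serial_number) {
        return id_table[serial_number];           // one extra indirection per lookup
    }

    // When a neuron record is reallocated or the memory is resized, only its
    // ID-table slot changes; every relational holding the serial number stays valid.
    void relocate(IdTable& id_table, int serial_number, NeuronRecord* new_location) {
        id_table[serial_number] = new_location;
    }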
  • A vocabulary of 30,000 words is an acceptable working size when words alone are considered. For some people, up to 300,000 unique words are known. Each concept to be remembered, e.g., "off the wall", has its own index, as do words, remembered events or conditions; each corresponds to a unique neuron record 1250 in the neuron table 125.
  • Experiences may or may not have their own index, depending on what they are and how they were formed. Because of this, it is realistic to have an index table 126 of 8-20 million items or more, for example.
  • Table of Neurons. Referring to FIG. 9, neurons 1250 are emulated by fixed-size information block 1251, and a variable number of relational connection records 1252. The latter may be conditional, predicated upon the state of other neurons. They may reference the ID indices 127 of both their target and conditional neurons. With better-suited hardware memory technology available, such as capable of directly forming relational linkages between neurons, these technology-dependent linkage-pointer structures may be superfluous and may be eliminated or replaced.
  • Basic information 1251 may include references to explicit spellings (e.g., a walk-back index to the text-tree for the word), pronunciation exceptions, visual-object descriptors and the like. Certain flags and start-indices for lexical matters and the like are also included here.
  • The relational 1252 is a link between two neurons. It may also be a link between a neuron and a state parameter. Relationals may be unidirectional or bidirectional in nature, and may be performed only if a specified set of conditions are met. Relationals are loosely suggested by the biological neural dendron.
  • When implemented in digital memory, it is convenient that relationals 1252 are allocated in the space immediately behind the fixed-length portion of a neuron record 1251. Normally a blank space is reserved there in anticipation of relational record insertions. Before inserting a new relational 1252, analyzer 30 checks for sufficient room and, if there is not enough, reallocates the entire neuron with greater space.
  • The length of the relational detail block 1252 is variable, depending upon the type and number of relational connections made to other neurons. It is not unreasonable that total (digital) memory may consume sixteen (16) megabytes to two (2) or three (3) gigabytes.
  • Relationals 1252 have an AND-OR organization. AND-connected relational records are grouped together following the fixed-length portion of the neuron.
  • Referring to FIG. 10, a specific target ID 1256 is generically defined to represent the OR condition, with the remainder of that ‘relational’ record ignored. As stated elsewhere in this discussion, certain neuron IDs are reserved for such special purposes as this. Similarly, certain values of the weight 1257 are reserved to indicate an INHIBIT condition, and the weights themselves may be negative, to reduce the level of recognition, the firing level.
  • By itself, the relational 1253 is unidirectional. The neuron 1250 it is a part of is fired to the degree that the neuron referenced by target ID 1256 fires. However, the firing of this neuron 1250 does not otherwise affect the target ID 1256. For example, Grass may imply Green, but Green does not imply Grass.
  • For conditions in which a relationship is bidirectional, analyzer 30 creates a suitable relational for each of the two neurons, each pointing back to the other. This is akin in software to a doubly-linked list.
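  • A possible digital layout for these records, based on the fields named above (target ID 1256, weight 1257, an optional conditional neuron, and the reserved OR and INHIBIT values), is sketched below. The field widths and the particular reserved values are assumptions made for illustration.

    // Hypothetical encoding of a neuron record 1250 and its relationals 1252.
    #include <cstdint>
    #include <vector>

    const std::int32_t OR_TARGET_ID   = 0;        // reserved target ID 1256 meaning 'OR'
    const std::int16_t INHIBIT_WEIGHT = -32768;   // reserved weight 1257 meaning INHIBIT

    struct RelationalRecord {                     // one unidirectional relational 1253
        std::int32_t target_id;                   // ID index 127 of the neuron fired from
        std::int32_t condition_id;                // neuron that must also fire, or 0 if none
        std::int16_t weight;                      // may be negative, reducing the firing level
    };

    struct NeuronRecord {
        std::int32_t  spelling_index;             // part of the fixed-size basic block 1251
        std::uint32_t flags;
        std::vector<RelationalRecord> relationals;   // variable-length portion 1252
    };

    // For a bidirectional relationship, one RelationalRecord is created in each
    // of the two neurons, each pointing back at the other (a doubly-linked pair).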
  • The weighted and conditional influence of this neuron upon another is defined by relational linkages 1252, of which there may be up to one-thousand (1000) or more for some neurons. Each new experience and relationship that is learned has a new relational linkage created for it. The garbage collection and management of neuron-relational memory spaces is discussed elsewhere in this patent.
  • Initially, new neurons 1250 and relationships are created in the reinforcement memory 11, where they remain until later validated and moved to long-term memory 12, or are deleted. Relationals 1252 in reinforcement memory 11 may refer to neurons in either memory, but those in long-term memory 12 may refer only to other neurons in long-term memory 12. Analyzer 30 tracks allocation, aging, validation, and ‘garbage-collection’ processes, as discussed in detail elsewhere.
  • Other Tables. Besides pure neurons or relationals 1250, both reinforcement 11 and long-term memories 12 may hold other encapsulated information. These data blocks are treated and referenced as ordinary neurons, but contain extended structures for efficient later recall of compound and complex entities. Details of each of these are discussed with the description of their relevant neurons.
  • The neuron process for recognition of sight and sound is by reconstructive correlation, matching a reference image, or sound against a known object or sound. Memory storage is ‘reconstructive’ in that actual sampled sounds or pixilated images are not stored. Rather, sufficient information to reconstruct a reference object (for comparison purposes) is remembered. Stored images and sounds then consist of lists of object artifacts rather than detailed information on them. The degree of match or similarity determines the neuron's firing level.
  • Refer to Table 8 for a list of some common supporting tables. The list is by no means complete, and one skilled in the art will realize that there are many ways to organize such information into tables without altering the means of this disclosure.
    TABLE 8
    List of Some Common Supporting Tables

    Table                             Description
    Task Lists (e.g., Motor Skills)   These are lists of actions to be taken, to carry out repetitive or learned tasks. They are specific to supporting emulators, such as those that handle motor skills or musical abilities. Task lists are usually coupled tightly to sensory processes, and can be started, interrupted or stopped by the main brain model.
    Aural Artifacts                   These are descriptors of basic sounds, including such things as phonemes, ADSR rules and the like. They are not complete words or sounds.
    Aural Interpretive Rules          This is a list-like set of rules for the interpretation of spoken speech, and augments the algorithmic-based lingual processes.
    Visual Artifacts                  This is an arbitrary set of visual elements used to recognize more complex objects. The artifacts may include lines at various angles, facial and nose shapes, alphabetic outlines, and the like. They are elements used for the reconstruction of visual images, of the minimum detail needed to perform image correlation matching.
    Visual Objects                    These are descriptions of complete visual objects, but of minimal detail needed to recognize them. For example, to recognize a specific face, only a portion of the eyes, nose and chin or cheekbone may be required. This reconstruction object is connected to the neuron for a specific person, for example, attaching the face to its identity. The connection is done via a bidirectional conditional link.
  • Recognition and re-creation of visual objects are different processes, and must be optimized separately. Biological function suggests that humans do not store detail, such as a bitmap image. Yet, they can certainly recognize a detailed object, and can accurately identify it when exposed to it. A correlation template is recreated from the stored table information and applied to the appropriate correlator. This may be, for example, a vector skeleton for use by the visual correlator for image identification. The neuron fires in proportion to the degree of match.
  • Event Queue and Memory 14. Events are special-purpose commands issued to a queue 14. They are slated for later execution at a specific time, after a specified delay or after a specified set of conditions are met. They are the means by which unwanted looping over information in the context pool memory 10 is circumvented.
  • An event is simply a marker or flag set down to remind the system to do something when a specified condition is met. It greatly simplifies the handling of actions that are asynchronous with each other. When the analyzer 30 discovers new information in the context pool 10, it may issue one or more events to the event pool 14. For example, the analyzer may create an event that adds new reference back into the context pool 10. It could also issue a conditional event to later force the analyzer 30 itself to iteratively rescan the context pool 10, such as may be done for an analytical temperament such as the Melancholy.
  • The same mechanism is also used for establishing conditional relationships between neurons, or between neurons 1250 and state parameters 22. Events can be generated by the alteration of state parameters 22. By issuing events for future execution, the analyzer 30 avoids getting side-tracked from the task at hand being worked.
  • Referring to FIG. 11 and FIG. 4, the event queue 14 consists of an interpreter 140 and an event list 141. Creation of an event causes an event 142 to be inserted in the event list. Events 142 in the list 141 consist of a command field and other optional fields shown in FIG. 12. The interpreter 140 repeatedly scans the event list 141 for events 142 that can be processed. Whether or not they can be processed is determined by the conditions and timing fields. The auxiliary data field, if present, contains information unique to the event type. Once an event 142 has been processed, it is removed from the event queue 14.
  • After interpreter 140 has scanned to the end of event list 141, it restarts scanning at the beginning. If no events 142 are left to process, it awaits the creation of a new event 142. One skilled in the art will realize that the event queue 14 can be implemented as hard-coded logic, as a micro-coded processor, a software emulation, an embedded processor, FPGA, ASIC, optical or other technology of choice, without altering the means of this disclosure.
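  • A minimal interpretation of that scan loop is given below; the Event fields, the not_before timing test, and the optional condition callback are assumptions used only to mirror the behavior of interpreter 140 and event list 141.

    // Scan the event list 141 once, executing and removing any event 142 whose
    // timing and conditions are met; the caller restarts the scan afterwards.
    #include <functional>
    #include <list>

    struct Event {                                // an event 142
        int command;                              // command field
        double not_before;                        // timing field: earliest execution time
        std::function<bool()> condition;          // conditions field (may be empty)
        std::function<void()> execute;            // action specific to the event type
    };

    void scan_event_list(std::list<Event>& events, double now) {   // interpreter 140
        for (auto it = events.begin(); it != events.end(); ) {
            bool ready = now >= it->not_before &&
                         (!it->condition || it->condition());
            if (ready) {
                it->execute();                    // process the event ...
                it = events.erase(it);            // ... then remove it from queue 14
            } else {
                ++it;
            }
        }
    }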
  • Conjector 70. Conjector 70 proposes decisions based upon incomplete or partial facts, or facts of low confidence. While the analyzer 30 is the main thinking facility for the emulator, it takes advice and proposals from both the conjector 70 and dreamer 75 blocks. Proposals from the conjector 70 are filtered by clutter filter 40 on the basis of temperament and personality.
  • During the processing of sentence data in the context pool 10, analyzer/correlator 30 acts on the sentence block to determine a suitable course of action where appropriate. If it ‘comes up dry’, the analyzer 30 invokes the conjector 70 to suggest a valid meaning. If the resulting quality of the conjector 70 output is too low, analyzer 30 may direct the communications interface 98 to ask for clarification. It sets appropriate parameter flags to await an answer to the question of clarification.
  • Conjector 70 output is similar to any normal neuron reference or sensory nerve that is firing at a relatively low level for the topic. Other than being flagged as coming from the conjector 70, output of conjector 70 is essentially identical to data inferred from sentences by semantic analyzer 50.
  • The conjector 70 behaves in a similar manner to the analyzer 30, except that it only looks at material in the present context pool 10. It is not bound by the same needs for hard facts as the analyzer 30 is, and effectively offers subjective information for consideration. Its proposals are largely ignored by the analyzer, except for cases such as the following:
      • Information is missing or incomplete.
      • Questions posed by the analyzer through the communications interface 98 are yet unanswered within the expected interval.
      • Overall level of confidence (firing) levels of information in the context pool 10 is low.
  • In effect, when answers are not available to the analyzer 30 from existing information, the analyzer turns to the conjector 70 to fill in the blanks.
  • For its operation, conjector 70 reviews outstanding questions or issues, as defined both in the context pool, supporting tables and appropriate state parameters 22. Some state parameters 22 track the present topical subject(s), questions being asked, and information presently being sought by analyzer 30. On the basis of this material, it scans even low-firing neuron references and commands within the context pool 10 and proposes (conjectures) answers for the analyzer 30.
  • Respect by analyzer 30 for conjecture is implied by the weighting placed on it. Proposals are ignored if they conflict with other information, or if better (stronger firing) information becomes available. Conjectures age rapidly and are soon forgotten from the context pool 10, whether or not acted upon. The analyzer 30 considers the source of the conjector 70's ‘information’ and its levels of confidence (firing levels). It then establishes its own need for the proposal, and its own level of confidence in the data. Rejected conjecture is immediately deleted.
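  • A minimal sketch of how such rapid aging of conjecture might be realized, assuming a normalized 0..1 firing level; the ContextItem structure, the fromConjector flag and the decay and forget thresholds are illustrative names and values only:

    #include <algorithm>
    #include <vector>

    // A conjecture is an ordinary context-pool reference, flagged by source, whose
    // firing level decays quickly with each pass and is dropped once negligible.
    struct ContextItem {
        int    neuronId;
        double firing;         // 0.0 .. 1.0 firing (confidence) level
        bool   fromConjector;  // flagged so analyzer 30 can weight it appropriately
    };

    void ageConjectures(std::vector<ContextItem>& pool, double decay = 0.5) {
        for (auto& item : pool)
            if (item.fromConjector)
                item.firing *= decay;                       // conjectures age rapidly
        pool.erase(std::remove_if(pool.begin(), pool.end(),
                       [](const ContextItem& c) { return c.fromConjector && c.firing < 0.05; }),
                   pool.end());                             // forgotten from the context pool
    }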
  • One skilled in the art will realize that conjector 70 can be implemented as hard-coded logic, as a micro-coded processor, a software emulation, an embedded processor, FPGA, ASIC, optical or other technology of choice without altering the means of this disclosure.
  • Dreamer 75. Dreamer 75 functions as the ‘right side’ in the brain emulation of this disclosure. It peruses neuron references in context pool 10 and uses different weightings for state parameters 22 than used by analyzer 30 for its inputs and decision processes.
  • The dreamer 75 influences the analyzer 30 primarily by injecting fired neuron references into the context pool 10, rather than just structured commands such as from the semantic analyzer 50. Where pre-existing information in the context pool 10 comes from visual or aural sources 60, or from visual neuron correlations, the dreamer 75 may output proposals in the form of command blocks.
  • Similarly to correlator-analyzer 30's processing methods, the dreamer 75 generates new references and commands based upon existing neuron firings. However, when traversing the neuron relational chains, lower regard is given to relational conditions 1252, as in FIG. 9. The resulting outputs are of low reliability, as indicated by both their source and its firing levels. When analyzer 30 is otherwise inactive or is in sleep mode, the dreamer 75 may indirectly alter the subject topics by issuing events to event queue 14. Due to the ‘noise’ levels involved, the dreamer 75 may rapidly flit from topic to topic. The dreamer 75 also remains active when the brain emulation is otherwise in a ‘sleep’ mode.
  • When subsequently processing context-pool 10 data created by the dreamer 75, analyzer 30 does not create new neurons or relationals in the reinforcement memory 11. Upon awakening from sleep mode, the analyzer 30 also rapidly purges residual dreamer-generated ‘information’ remaining in the context pool 10.
  • The dreamer 75 therefore behaves as a ‘movie-maker’ of sorts, unconstrained by relational logic. It creates new ideas loosely based on the context of the moment, ideas that also have very rapid lifetime decays. While this firing of neurons is not in a logical or cohesive way, it still influences decisions and analyses made by the analyzer 30.
  • Dreamer 75 is algorithmically based, statistically ignoring strong-firing neurons and applying logarithmic weighting to firing neurons as a part of its own processes. In this way, dreamer 75 peruses the context pool 10, effectively giving weight to neurons barely firing.
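  • One way to picture this re-weighting, purely as a sketch with illustrative constants and a 0..1 firing scale, is a logarithmic curve that boosts barely-firing neurons while strong-firing neurons are usually skipped:

    #include <cmath>
    #include <cstdlib>

    // Hypothetical dreamer weighting: strong-firing neurons are statistically
    // ignored, and a logarithmic curve gives disproportionate weight to neurons
    // that are barely firing.
    double dreamerWeight(double firing /* 0.0 .. 1.0 */) {
        if (firing > 0.8 && (std::rand() % 100) < 75)
            return 0.0;                                      // usually ignore strong firings
        return std::log1p(9.0 * firing) / std::log(10.0);   // 0 -> 0, 1 -> 1, 0.1 -> ~0.28
    }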
  • The impact of the additional neuron firings in context pool 10 is that the dreamer places greater overall weight on neurons than the analyzer 30 would have. During the course of activity, the firing of some neurons will be enhanced because of the multiple references to those neurons. Analyzer 30 appropriately weights information flagged as coming from the dreamer 75, and continues to apply its normal logic to the data. Where it is seeking new ideas, it will weight dreamer-induced references higher than it ordinarily would.
  • Because dreamer 75 operates at lower effective thresholds than useful for analyzer 30, it is more prone to ‘noise’ and error than is the analyzer 30. While its outputs are less reliable insofar as decisions go, its purpose is different. During non-sleep operations, dreamer pseudo-information passes through clutter filter 40 where it may be rejected by the personality and temperament filters. During non-sleep operations, the clutter filter rejects more dreamer 75 output by altering rejection filter thresholds.
  • One skilled in the art will realize that dreamer 75 can be implemented as hard-coded logic, as a micro-coded processor, a software emulation, an embedded processor, FPGA, ASIC, optical or other technology of choice, without altering the means of this disclosure.
  • Speech and Visual Analyzers 60. The emulated brain of the present disclosure may be applied to a mechanical system, whether a skeleton or vehicle, in which case list-based motor-skill learning functions are used. Interfaces from task list handler 13, event handler 14 or analyzer/correlator 30 can be used to control external hardware. These interfaces can be used to apply specific levels of force, when used with closed-loop feedback, or a specific mechanical position, with or without feedback.
  • Sensors used for the feedback systems are determined by the application. For example, placing one's hand on a table requires either a priori knowledge of the table height and position, or requires feedback such as derived from the eyes. Suitable sensors might be a pressure sensor for the nose (so one doesn't bump into a wall more than once) or for the hand. Aural sensors provide feedback to ascertain the proper formation of sounds, such as to sing on key with existing music.
  • The methods of this disclosure create correlation templates or proposals, visual or aural objects presented for correlation against visual images or sounds. Binary search methods are used to select the proper template for correlation, to rapidly determine degrees of recognition. The correlation method constitutes a processed sensor, a sensor with internal ability to ascertain degrees of recognition.
  • Non-processed sensors are simple temperature, pressure, humidity or light intensity measurement devices, whose outputs are simply formatted appropriately for input to an interface. Processed sensors require interpretation and possible correlation before they can develop meaningful signals. For example, using any number of algorithms, a visual sensor takes a template image and returns the degree of correlation in the present image. Similarly, processed aural sensors take a prototype, such as for a phoneme, and return the present degree of correlation. Phoneme variations may be proposed if a matching word has its neuron firing in context pool 10.
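  • A processed sensor can therefore be pictured as an interface that accepts a template and reports a degree of recognition; the sketch below is illustrative only, and the Template and ProcessedSensor names are assumptions rather than elements of this disclosure.

    #include <vector>

    struct Template {
        int                neuronId;   // neuron whose firing will reflect the match
        std::vector<float> features;   // e.g., a vector skeleton or phoneme prototype
    };

    // A 'processed sensor' is handed a template and returns the present degree of
    // correlation (recognition) against the live image or audio stream, unlike a
    // simple temperature or pressure sensor that merely reports a value.
    class ProcessedSensor {
    public:
        virtual ~ProcessedSensor() = default;
        virtual void   setTemplate(const Template& t) = 0;
        virtual double correlate() const = 0;   // 0.0 (no match) .. 1.0 (full match)
    };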
  • Speech and visual analyzers 60 use task list or other memory such as 13 to retrieve the next sequential image templates for correlation as proposed by analyzer 30. These are conveyed as present settings of the relevant state parameters 22. For example, some motor skills demand visual feedback for the recognition of a table, its upper surface position, and the position of that portion of the hand to be placed there. These are separate objects that must be recognized in turn by the visual correlation processes.
  • When the table top has been identified, its position must be reported to the context pool 10, as is the position of a suitable landing site on it, the proper area prescribed by the analyzer 30's intention and desire. The outputs of visual correlation are conveniently made relative to the location of the skeleton's eyes, such that correction for hand motion can be made.
  • Particularly for the visual recognition processes, motor skills require feedback for position, rate of travel, distance and the like. From a single sensor (e.g., a pair of camera ‘eyes’), multiple streams of feedback can be derived, with the information forwarded as command or event packets to context pool 10.
  • Visual and aural cues aid in confirmation of recognition, delivering feedback for required motion control. These are needed, for example, to rotate and tilt the head properly and to then direct the eye yaw and tilt so the detailed center of the foveated vision is centered on the portion of the scene of interest. These matters are handled interdependently by list processor 13 and visual/aural analyzer 60.
  • The speech analyzer 60 dumps its output into the semantic analyzer 50 to actually parse spoken material into items suitable for the context pool 10 memory.
  • Obviously, many technologies for such processed sensors exist, as known by one skilled in the art. The present disclosure permits interactive presentation of template information with the sensor, in concert with the functions of this brain emulation. One skilled in the art will realize that visual analyzer 60 itself can be implemented as hard-coded logic, as a micro-coded processor, a software emulation, an embedded processor, FPGA, ASIC, optical or other technology of choice, without altering the means of this disclosure.
  • Memory Garbage Cleanup and Collection. Garbage collection refers to the reclaiming of unused fragments of memory. During this process, the fragments are sought out and objects in surrounding memory are moved up or down, coalescing unused fragments into a larger block. Coalesced blocks are remembered for later reuse.
  • Cleanup is a catch-all phrase to cover all things that need to be done to the memory to optimize it. As noted below, it is used to resize certain areas of memory to optimize usage, reclaiming previously reserved space that could better be used elsewhere.
  • Memory garbage collection and cleanup processes usually involve the movement of information in memory, with suitable updates to indices and pointers to properly reflect the movement.
  • Expansion of Relational Linkage Blocks. When a neuron is originally assigned and given an ID by analyzer 30, an empty area for the relationals 1252 is reserved behind the basic neuron information block 1251. Refer to FIG. 9 and FIG. 10. As new relationships are formed, relational records 1253 are appended to the end of the above linkage list. Eventually, this free space is exhausted, and there is no room to add the relational 1252 between the end of the present linkage block and the start of the next neuron. Something must be explicitly done to fix this.
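  • As a rough sketch (with hypothetical structure names standing in for the records of FIG. 9 and FIG. 10), appending a relational succeeds only while the reserved area behind the neuron still has free slots:

    #include <cstddef>
    #include <vector>

    struct Relational { int targetNeuron; int type; int weight; };   // cf. records 1252/1253

    struct NeuronBlock {
        int                     id;
        std::vector<Relational> relationals;   // stands in for the area behind block 1251
        std::size_t             capacity;      // slots reserved when the neuron was created
    };

    // Returns false when the reserved space is exhausted and the block must be
    // explicitly expanded (for example, during 'sleep'-time cleanup).
    bool addRelational(NeuronBlock& n, const Relational& r) {
        if (n.relationals.size() < n.capacity) {
            n.relationals.push_back(r);
            return true;
        }
        return false;
    }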
  • ‘Sleep-Time’ Cleanup Activity. Sleep is used to remove clutter from short-term memory, half-formed fragments of thoughts, conjectures, and certain other items of information. This process enables the next day to start out fresh, just as with a human. It is a suitable low-risk time to perform optimization of memory. During periods of ‘sleep’, the inactive state of the brain emulator can be used to advantage to handle movement of validated facts from reinforcement to long-term memory 12. This process leaves unused holes in reinforcement memory 11, which are also cleaned up.
  • During the reallocation of the neuron in long-term memory 12, or when moving a relational 1252 from reinforcement memory 11 over to the associated neuron in long-term memory 12, it is possible there is no room left for the relational 1252. For this reason, a neuron's space in long-term 12 must sometimes be expanded.
  • For this, reinforcement memory 11 is scanned to determine what neurons are eligible for transfer. If transfer would be impeded by lack of space, the associated long-term neuron memory record 1251 is resized upwards.
  • When available reinforcement 11 or long-term memory 12 has diminished below threshold, neuron space can also be resized downwards during ‘sleep’ times, to optimize it. Neurons 1250 with significant free space behind them can have some of that space reclaimed. Heuristics determine whether or not to downsize. Sparse separation of neurons 1250 in memory is always faster, so reclamation is only done if required.
  • Incoming information 93. The implementation of deference between two modeled individuals takes place in analyzer 30. The position of the present individual being modeled within a hierarchy of individual, political or institutional structures is also kept in parameters 22.
  • All information except that from the analyzer/correlator 30 first passes through the clutter filter 40, where it may simply be ignored and scrapped. Clutter filter 40 uses personality-specific parameters 22 to determine whether the composite personality is even interested in addressing the information, which has been pre-classified. For example, a Choleric temperament is likely to completely ignore human-interest information, whereas a Sanguine temperament readily devours it.
  • The filter 40 is a catch-all area to pass preliminary judgment on data, including judgment of its source. The filter 40 is controlled by a number of dynamically-changing parameters, including the current state of patience. When context pool 10 is full, filter 40 drops information, bio-mimetic to someone in the state of “mental overload.”
  • Preemptive Training. The brain emulation of this disclosure learns over time, influenced by underlying temperament. Normal human learning processes are used by the emulated brain. Nothing is retained in permanent memory 12 by the analyzer 30 unless it has been reinforced for approximately 21 days, avoiding an accumulation of ‘clutter’ facts and relationships. Facts learned are normally interpreted under the influence of the root temperament, which has its implicit filters and analytical processes (or limited analytical processes, as in the case of the Sanguine).
  • The brain emulation may be ‘trained’ by a method preempting normal temperament-and-time processes, to rapidly absorb facts, control and environmental conditions. The process is therefore described here as preemptive training. It is assumed in this case that the ‘facts’ and relationships presented are previously determined to be true and factual, “from God,” as it were.
  • Preemptive training may be turned on or off at will, externally to the emulator. It can be turned on to effect rapid training of these pristine facts and relationships, bypassing temperament-related decision steps and levels of analyzer 30 and clutter filter 40. In this training mode, access is given to state parameters 22 and controls not otherwise permitted. When training is completed, these accesses may be turned off again. The modified parameters then immediately take effect in the personality.
  • When in preemptive training (‘setup’) mode, the entire contents of the memories (one or all), together with all state parameters 22, may be selected and copied to external storage. This has application for both the commercial marketing of the information as “intellectual property”, and for military purposes as discussed elsewhere. Such a ‘snapshot of being’ may be replicated elsewhere and used as the basis for additional training.
  • Facts and Relationals. Under preemptive training, new facts and preliminary relationships between them can be defined using declarative monolog in a text file, or a verbal narrative if a speech analyzer 60 is present. These are described in English prose format. The grammar is interpreted by the English Parser, but it is not filtered or further interpreted by analyzer 30 or conjector 70. Normal processes for grammar interpretation are followed, but the information undergoes no further temperament-based interpretation or filtering. This approach lets the brain emulation query the trainer for information that is unclear or not understood, and the training process becomes similar to that of a knowledge-hungry human being.
  • Religious Belief and Personal Conviction. Religious beliefs and personal convictions may be established by preemptive training. As with all preemptive training, the brain emulation will have no idea of why it has these beliefs or convictions. Even so, they can be overridden by deep (extended and consistent) normal training, thereafter.
  • The beliefs are set by a prose-style description in a text file, to be read by the brain emulation. If it does not understand something or considers something illogical, it will ask for clarification by the trainer. The prose can subsequently be altered to preclude that question for the future.
  • There is nothing fundamentally different in the matter of religious belief and personal conviction over other types of facts 1251 and relationships 1252 that may be learned. However, by defining them under preemptive training, the normal analytical checks by the analyzer 30 for consistency and factual basis are bypassed, making them an integral part of the emulated brain's basis of understanding. Although religious beliefs or personal convictions are established here by preemptive training, they could also be trained (non-preemptively) over extended time.
  • Specification of Control Parameter Values. The many control parameters 22 and their default values may also be preset by preemptive training. This can also include specific emotional responses to be evoked when defined conditions are met. The result is again that the brain emulation does not know why (he) responds that way, but he simply does. This is useful to preset human-like likes and dislikes for specific things, for accurate emulation of a person.
  • Preemptive training is the method by which the temperament of the brain emulation is specified, including both the base temperament type and the upper-level composite of temperaments. These settings will directly affect the outcome of responses and decisions made by this emulation.
  • The time frame over which the brain emulation learning reinforcement occurs is nominally 21 days, but defaults to somewhat different durations on a temperament-dependent basis. Table 9 gives some representative default reinforcement intervals. ‘Permanent’ learning also takes place during times of emotional stress or trauma, during which the interval of this table is proportionately decreased.
    TABLE 9
    Temperamental Learning-Reinforcement Intervals
    Temperament Duration
    Choleric 21 days
    Sanguine 18 days
    Phlegmatic 15 days
    Melancholy 21 days
  • When the time is reduced (it does not affect preemptive training), the brain emulation is more likely to retain trivia and insignificant information. After the emulation is made operational, those presets become an intrinsic part of its responses. They define the settings from the present time onward, until altered.
  • While in preemptive training mode, memories 11, 12, and 13 and other tables may be saved to external storage, upon command. This includes facts and relationals 1251 and 1252, and relevant parameter settings 22 and 20, and their defaults. In short, anything trained can be restored to the memory it came from. One skilled in the art will realize that the methods of saving memory and parameter states are dependent upon the technology of implementation, and that variations in these methods do not materially alter the system of the present disclosure.
  • When using a brain emulation of this disclosure to model a specific person (e.g., a foreign national for military purposes), the emulation's memory and parameter settings can be “snap-shotted” to enable a simulation re-run under new conditions or parameter settings. Anything learned between the snapshot and the time of their later reloading is lost and may not be incrementally recovered and reapplied, unless it was also snap-shotted.
  • Degreed Deference. A concept that plays a necessary role in human relationships is that of deference to another person. Deference is not ‘black-and-white’, but exists by degree. Normally the human makes decisions that suit himself under the present conditions, without regard to other people. However, he/she will have particular regard (deference) to some people, such as parents, bosses, military chain of command and the like. The brain emulator uses degreed deference to emulate this implied relationship. Referring to FIG. 13, the Present-Need-to-Defer parameter 229 provides the weighting.
  • Multiple deference tables 128 may be created in memory 12, that apply in a specific context 1283 (e.g., military, political, social order, class). All deference tables are chained together using links such as 1284 and 1285. The analyzer 30 scans the deference tables to alter a tentative decision, if it conflicts with an external command, such as inferred from an imperative sentence in semantic analyzer 50.
  • Analyzer 30 seeks a deference table matching one or more active contexts of the moment, as maintained in state parameters 22. Finding one, it specifies the parameter for the rank self-identity. If the subject being measured for deference is another person, that person's ID 200 is used instead. The relational comparator 1280 makes its decision as the deference output 1282. The decision weighting 1296 is further adjusted by the present need to defer 229. Signal 1296 is then used to determine if any decision should be made at all. In this manner, the analyzer 30 defers to commands of authority it is subject to, or weights the decision outcome if the conflicting command was merely a recommendation of external authority.
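  • A minimal sketch of such a deference decision follows; the table layout, the rank convention and the function names are assumptions for illustration, with presentNeedToDefer standing in for parameter 229.

    #include <string>
    #include <vector>

    struct DeferenceEntry { int personId; int rank; };        // higher rank = more authority
    struct DeferenceTable { std::string context; std::vector<DeferenceEntry> entries; };

    int rankOf(const DeferenceTable& t, int personId) {
        for (const auto& e : t.entries)
            if (e.personId == personId) return e.rank;
        return 0;                                             // unknown: no implied authority
    }

    // Weighting applied against a tentative decision that conflicts with a command
    // from 'otherId'; a non-zero result causes the analyzer to defer, or to re-weight
    // the decision outcome if the command was merely a recommendation.
    double deferenceWeight(const DeferenceTable& t, int selfId, int otherId,
                           double presentNeedToDefer /* 0..1, parameter 229 */) {
        bool otherOutranks = rankOf(t, otherId) > rankOf(t, selfId);
        return otherOutranks ? presentNeedToDefer : 0.0;
    }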
  • The deference tables 128 therefore supply a realistic influence by external authority upon the brain emulation. When used in a military environment, for example, a simulation manager in charge of the brain emulator(s) can exert real-time control upon the brain emulations, if the manager's ID is placed at the top of all deference tables.
  • Preemptive training establishes the set(s) of hierarchical tables 128 for relationships between this emulator and others (or other people). The same prose-style description is used to describe the ‘chain of command’ and where the current brain emulation fits within it.
  • Establishing a down-line deference (i.e., a condition where another emulator or person should defer to this brain emulation) is permissible. It sets the emulator's expectations of that other emulator or person. Response to a violation of those expectations is dependent upon the base temperament specified for the present brain emulator, and may also be defined during preemptive training.
  • The Implementation of Temperament. Certain assumptions made by any such model of human psychological function, including this one, enable or simplify the understanding of brain functions. Properly done, they permit ready creation and implementation of a synthetic brain based on that model. They may be right or wrong, but such assumptions permit rapid creation of a ‘baseline’ implementation. Such assumptions do not affect the overall means of this disclosure.
  • FIG. 14 depicts one such assumption, the makeup of composite personality. The assumption is made that each person is ‘pre-wired’ at birth with a specific set of pre-dispositions, one of four basic types well known to those skilled in the state of the art. These include the Choleric, Melancholy, Sanguine and Phlegmatic temperaments, as categorized and defined among the basic tenets of classical psychology.
  • To these basic predispositions (temperaments) is added a set of experiences and training, learned from the environment in which the individual lives. The from-birth predispositions are collectively defined as a ‘base temperament’, as used here. The sum of that temperament and the set of experiences is used by the present disclosure to define the composite personality.
  • FIG. 15 depicts another assumption used by the present disclosure and model, approximate traits exhibited by the four classical temperaments. The above ‘pre-wired temperament’ 201 of FIG. 2 is replaced by the actual classical temperament names in FIG. 15 and FIGS. 16A-D.
  • FIG. 15 illustrates typical traits (largely, but not fully) specific to one temperament type, as indicated above each temperament. FIGS. 16A-D represent the composite personalities of people, each based upon one of the four underlying predisposition temperaments.
  • Through experience and training, the personality of a given underlying set of predispositions may ‘reach out’ to intentionally assimilate desirable characteristics of the other three temperaments. The result is a broader composite personality. The individual being modeled here, a Melancholy of FIG. 16B, for example, may embrace decisiveness or leadership traits more characteristic of a Choleric.
  • Another assumption made here simplifies the understanding of human behavior, and the implementation of this realistic brain emulator. It is that every person has one and only one basic underlying temperament, regardless of past or present experience or training. When placed under emotional or physical trauma, or under extreme pressure, the actions, behavior, interests and decisions made by the person (or emulation) tend to revert to those characteristic of the person's base temperament.
  • Obviously, other assumptions could instead be made about the origin and development of temperament and personality, ones which may be equally valid. These could be used here instead by way of examples, but do not, however, affect the present disclosure or its embodiments. The above assumptions provide a vehicle for the description of the present disclosure, and provide a means for visualizing an otherwise complex matter.
  • Weighting of Brain Parameters. FIG. 17 depicts the Choleric parameter 202 in its relationship to the Propensity-to-Decide parameter 222, noted earlier. The actual value of parameter 222 is the sum-of-products 2421 of the current values of all four temperament-controlling parameters, each with its own weight. The values of the weights 2420 applied are selected and fixed in the emulation, but the controlling temperament parameters may themselves be adjusted as desired.
  • It is desirable for one mode of operation that all of the four temperament parameters such as Choleric 202 have values of 0 or 100%, such that they are mutually exclusive. It is desirable for other modes of operation that the percentages of all four temperament parameters may be non-zero, but shall total 100% when summed. An example means to implement this is illustrated in FIG. 17.
  • It may be convenient, for example, to ‘synthetically’ force the sum of percentages of the four temperament parameters to be 100%. Using weights 2420 given by the example of FIG. 17, the setting of the Propensity to Decide parameter 222 is given by the equation:
    Propensity to Decide = 50%*Choleric + 30%*Sanguine + 15%*Melancholy + 3%*Phlegmatic.
  • By ignoring how the ‘pseudo-neuron’ temperament parameters are set, they may be treated as normal neurons in a neural network.
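  • A minimal sketch of the sum-of-products 2421, using the example weights quoted above (the structure and function names are illustrative, and temperament values are expressed here as fractions of 1):

    struct Temperaments { double choleric, sanguine, melancholy, phlegmatic; };

    // Weights are fixed in the emulation; the four temperament-controlling
    // parameters themselves may be adjusted as desired (see FIG. 17).
    double propensityToDecide(const Temperaments& t) {
        return 0.50 * t.choleric
             + 0.30 * t.sanguine
             + 0.15 * t.melancholy
             + 0.03 * t.phlegmatic;
    }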
  • A useful assumption made by this disclosure is that human beings (being emulated) have a root, or base, temperament at birth that gives the human certain propensities for behavior. Experience, training and growth may cause the human to take on selective traits found predominantly in one or more of the temperaments other than the baseline (‘pre-wired’) temperament.
  • Implementation of Trauma. A part of this disclosure is the implementation of the human response to emotional pressure or to physical or emotional trauma. Such response is modeled here, for example, as the reduction of impact of such experience, training and growth, such that the personality temporarily is dominated by the ‘pre-wired’ temperament. This is depicted in FIG. 18.
  • In FIG. 18, the elements of FIG. 17 are augmented by a selector 241, which takes as its output either of its two inputs, one or the other in its entirety, or a percentage of each input as selected by a determining control input. In this case, the normal operation and description depicted by FIG. 4 is altered under emotional or physical trauma or extreme pressure, as noted by parameter 230.
  • In this case, selector 241 is interposed between temperament sum 2421 and the Propensity to Decide parameter 222, such that when under trauma, that decision behavior is instead determined by the ‘pre-wired’ root temperament 201. The base temperament is pre-chosen as one of the operational set-up values for the brain emulation and is presumably unchanged for ‘life’, although nothing prevents such change.
  • Trauma parameter 230 is triggered and set by sensing other parameter or neuron conditions that indicate levels of extreme emotional pressure or trauma, or physical trauma or shock. Trauma 230 is configured to decay automatically with time, using a linear, logarithmic or other rate, to its nominal ‘off’ (unperturbed) state or value. It is normally triggered by a change of the above conditions, can be re-triggered if the condition is sustained or recurs, and can be designed to decay immediately if the condition is removed.
  • The conditions triggering Trauma parameter 230 are not depicted in FIG. 18, but are presumed to exist, and consist of a sum-of-products of parameters and brain nodes from whose values the trauma can be sensed.
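  • A compact sketch of selector 241's blending behavior and of the decay of trauma 230 might read as follows; the linear blend and the decay rate shown are illustrative choices, not the defined implementation.

    // Under trauma (0..1), the decision value is steered away from the learned
    // composite sum-of-products and toward the 'pre-wired' root-temperament value.
    double select241(double compositeValue, double rootTemperamentValue, double trauma) {
        return (1.0 - trauma) * compositeValue + trauma * rootTemperamentValue;
    }

    // Example decay step applied each emulation tick; the rate (or a linear or
    // other law) returns trauma 230 to its nominal 'off' state over time.
    double decayTrauma(double trauma, double rate = 0.99) {
        return trauma * rate;
    }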
  • Handling of Gender. The basic methods of FIG. 18 are extended to differences of activity between male and female people. For this case, processing flow is augmented with additional multiplexor and weighting tables such as 241 and 242. These would be driven by the Gender parameter 209, instead of Trauma 230, for example. Where appropriate in the decision and thought processes, these additions are incorporated to account for gender-related processing differences.
  • Use in Military or Political Simulations. Because this disclosure is capable of accurately emulating human behavior, the brain emulation finds use in many military applications. Using prior means, it is difficult to obtain accurate predictive modeling of combat force decisions, particularly those motivated by religious belief systems and belligerent political ideologies. In the present environment of asymmetric warfare, the ability to forecast combatant decisions becomes critically more important. The means of the present disclosure provide this capability. Refer to FIG. 19 and FIG. 20.
  • Brain emulator 311 as described previously can be configured to receive ‘verbal’ input in the form of a text stream 93 and to emit conversational output text 98. By the addition of a TCP/IP interface 3112, or other interface such as for the 1553 bus, the brain emulation 3110 can be network-connected to a local or remote network 312. It becomes a network-connected brain emulation 311. It should be evident to one skilled in the art that many variations of interface 3112 are possible without changing the system of the present disclosure.
  • It is now possible to configure a cluster of these emulators together to form a team. In FIG. 20, these are demonstrated as a Battleforce simulation cluster 310, such as may be used to predictively model combatant forces. The same configuration can also be applied, for example, in an Unmanned Aerial Vehicle (UAV) ‘cockpit’ to emulate a conventional flight crew, each individual specifically trained for his task role within the crew. It can likewise be applied to an unmanned underwater vehicle, to make autonomous mission decisions when disconnected from the host vessel.
  • When used as a battleforce simulation cluster, a simulation team 315 of human operators can be assigned to upload intelligence to emulators 311 to accurately emulate key individuals in the modeled battleforce. As new information becomes available on the modeled combatants, preemptive training can be used to update the models.
  • The emulations 311 used in the simulation cluster can use the port concept of the TCP/IP protocol to restrict conversations among themselves. Such specific local-communications ports can be precluded from access by other such clusters via conventional internet gateway 313. Cluster 310 can then be used to emulate an enemy combatant force (e.g., a ‘Red’ force), an unknown combatant force, coalition or friendly (e.g., ‘White’ or ‘Blue’) forces, secure from each other.
  • Multiple clusters 310 may be interconnected to form an integrated battleforce simulation system 31 as shown in FIG. 21. Simulations would be under the overall direction of a simulation director 330. The director 330 can have secure access to internal conversations within the battleforce clusters 310 by means of a dedicated encrypted port, through which gateway 313 replicates and encrypts the local busses 320. This configuration permits simulation teams 315 to work independently of each other but under the scenario proposals and directions of the director 330.
  • The simulation director 330 can remotely take snapshots of the memory and brain parameters of all brain emulations in the system 31. By taking such periodic snapshots, the simulations can be ‘rewound’ and restarted with different scenarios, intelligence information or updated personality profiles.
  • Simulation teams 315 may preferably consist of psychologists and people with knowledge about the personalities, governments or composite forces they are responsible for emulating. This disclosure permits realistic inclusion of religious belief, moral convictions (or lack of them), chains of command and authority, and other relevant personal information required for accurate predictive modeling of people systems.
  • The simulation system 31 may be located in a local region or may be distributed across the world. Results of such simulations can be made available to the actual warfighters as a part of C4ISR.
  • Parsing of Human Language
  • Definitives Versus Declarations
  • There are many alternative organizations for the process that separates definitive sentences from declarations. This is generally controlled by the structure defined in the Backus-Naur Form (“BNF”) file that describes the natural language (e.g., English).
  • The Language Definition
  • The parser itself is created from a top-down description of the language, and the description (a “.BNF” file) is then translated by the Lingua compiler into a C++ class that serves as a parser. At run-time, that class parses the sentence in accordance with the language definition in the BNF file. Incoming sentences are parsed according to that definition, and the constituent parts are pushed onto a stack.
  • The BNF is written in top-down fashion, such that a sentence is defined as a Subject and a Predicate, while a Subject is a Noun Phrase, which itself is an optional ‘a/an’ determiner, a set of optional adjectives and a noun-equivalent. This process progressively defines sentence parts in more detail, and includes all realistic variations that a sentence may have.
  • The Parsing Stack
  • As parsing progresses, information from the sentence is tossed onto a stack in a first-in, first-out order. Where the parser has attempted to parse something as a Clause when in fact it is not, all information related to the (suspected) clause is discarded and later replaced by the correct data.
  • For the sake of convenience, significant portions of the sentence such as Subject, Predicate, Independent Clause and others are bracketed on the stack by begin/end markers.
  • Identifying a Definitive Sentence
  • A ‘definitive’ sentence defines something. The brain supposedly remembers the definition of a word, and possibly makes associations or relationships with it. In practice, definition of a word or topic may begin with a definitive sentence, but the definition is elaborated with declarative commentary afterwards.
  • Generally speaking, it is possible to know whether or not a sentence is a definitive (a “DEFN”) strictly from the structure of its grammar. If all sentences were well-formed, it would be reasonable to identify the DEFN entirely within the BNF description of a definitive.
  • In practice, that places a significant and unreasonable burden on the BNF. Further, the BNF cannot identify subsequent declarative topic expansion as being definitive; that must be ascertained in a following step.
  • The parser should be as streamlined and fast as practical. Currently the majority of the process load is caused by efforts to differentiate between definitive and declarative statements. A lot of recursion occurs as one pattern match is attempted, fails, and another is tried. Additionally, other sentence types calling on these same patterns have to go through this extra recursion as well.
  • In the real world, many problems arise within people when ‘declarations’ get pushed into their ‘DEFN centers,’ giving ideas more import than they deserve. Racism, bigotry and hatred all seem to occur when a declaration gets handled as a definition. The system should therefore be very choosy about what it lets come through as a definition, and the best way to handle that is post-parsing. It is also better to err on the DECL side by missing a DEFN; this is less catastrophic than pushing a false DEFN.
  • The brain's parsing system could be used to assist post-parsing as follows:
      • 1. All statement patterns get pushed to the stack as declarations.
      • 2. The parser throws clues to the stack to help post parsing determine how to handle the statement. Modifiers (e.g., all, some) and determiners (a, an, the) in the subject and verb types (is, are) are primary elements useful to determine if a statement is definitive. An interface function can rule out a DEFN by checking for a set of these conditions. A token can be pushed saying: a. DECL, or b. POSSIBLE_DEFN.
      • 3. Such tokens can be pushed within each independent clause.
  • Post parsing can more readily look forward within the stack to help determine a DEFN versus DECL, because we are not restricted to any cases or sub-patterns of the statement pattern. This system is more efficient, and in the end enables us to accurately differentiate between DEFNs and DECLs.
  • Ascertaining a Declaration
  • Modifiers (e.g., all, some) and determiners (a, an, the) in the subject and verb types (is, are) are primary elements useful to determine if a statement is definitive. Absence of a direct object is also a possible indicator of a definitive sentence. The original methods devised to determine a DECL were:
    TABLE 10
    Some Conditions for Definitives
    Remarks Vb Suffix Examples
    Pres-Simple Forms:
    A. ITV Dogs bark.
    B. IRR-PRES Dogs unwind.
    C. Vos Gerund dogs enjoy hunting.
    E. Vos Adj Dogs act strange.
    F. Vos Noun_Ph Dogs resemble their owners.
    IS-BE Forms:
    A. Is_Be IRR_PPART Houses are built/broken.
    B. Is_Be IRR_PRES ers, ed Dogs are forgivers.
    C. Is_Be Noun_Ph Dogs are animals.
    D. Is_Be Adj_Ph Dogs are slimy.

    These are now replaced with the following:
  • These 3 conditions must be met for the statement to be a possible DEFN:
      • Is_Declaration: The statement must parse via the declaration pattern.
      • !Decl_Deter: This flag is set off by pronouns in the subject, demonstratives in the subject or a definite article (“the”) in the subject.
      • (Is_Be || Pres_Vb): Is_Be indicates the verb is an Is_Be verb. Pres_Vb is set for all present verbs.
  • If all 3 of these conditions are met, we possibly have a DEFN.
      • 1. Dogs are animals.
      • 2. Parsed by the DECL pattern.
      • 3. No pronouns in the subject (she is blue), no demonstratives in the subject (that dog is blue), no definite articles (the dog is blue).
      • 4. “is”=Is_Be verb.
  • Dogs are animals.=POSS_DEFN
  • If any of these 3 conditions is not set, we have a DECL.
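  • A sketch of this post-parsing test, treating the flag names from the text as hypothetical booleans gathered while the clause was parsed:

    struct ClauseFlags {
        bool isDeclaration;  // the statement parsed via the declaration pattern
        bool declDeter;      // pronoun, demonstrative or definite article in the subject
        bool isBe;           // the verb is an is/are form
        bool presVb;         // the verb is a present-tense verb
    };

    enum class ClauseKind { DECL, POSS_DEFN };

    ClauseKind classify(const ClauseFlags& f) {
        if (f.isDeclaration && !f.declDeter && (f.isBe || f.presVb))
            return ClauseKind::POSS_DEFN;   // e.g., "Dogs are animals."
        return ClauseKind::DECL;            // e.g., "The dog is blue."
    }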
  • 1. dogs are animals.
    Stack + 0 BEG_CLS <CLS> 0
    Stack + 1 POSS_DEFN dogs are animals.
    Stack + 2 BEG_SUBJ <SUBJ>
    Stack + 3 T_NOUN dogs (1639)
    Stack + 4 END_SUBJ </SUBJ>
    Stack + 5 SUBJECT dogs
    Stack + 6 BEG_PRED <PRED>
    Stack + 7 T_VERB are (1000)
    Stack + 8 ACTION_PRES (action in present)
    Stack + 9 VB_PLURAL (plural)
    Stack + 10 T_NOUN animals (1626)
    Stack + 11 VB_GER animals
    Stack + 12 CONVEY_ONGOING (progressive)
    Stack + 13 ISNT_CNDX (1626)
    Stack + 14 END_PRED </PRED>
    Stack + 15 PREDICATE are animals
    Stack + 16 END_CLS </CLS> 0

    Parse-to-Neuron Mappings
  • Referring to FIG. 22, there are illustrated sample relational connections. The following example sentences are parsed (as definitive sentences) and are then used to permanently create neural relationships. The groups of boxes, the Man and the Chase neurons, are relational connections stored with the respective neuron.
  • This example shows two sentences on the same general topic (e.g., men), defining what certain men are like. It also demonstrates what/who is known to be capable of belching (whatever that means).
  • FIG. 22 uses some of the values from Table 11 below.
  • Table of Relational Commands
    TABLE 11
    Interpretation of Relational Weights
    Weight Item                Neuron Index     Remarks
    (See Remarks)              “how”
    (See Remarks)              “what manner”
    R_ACTION                   Verb Neuron      Used by any neuron to specify action to take if relationals enable. This must be the last relational of an AND set.
    R_ACTION_F                 Verb Neuron      Same as R_ACTION, but terminates a list subset.
    R_ACTOR                    Noun Neuron      Used by verb neuron to indicate who fired this relational set. This must be the last relational of an AND set.
    R_ACTOR_F                  Verb Neuron      Same as R_ACTOR, but terminates a list subset.
    R_CDX ± 0 . . . 100%       Emotion Neuron   Degree of influence/coupling: 100% is full suppression.
    R_CDX, ± 0 . . . 100%      “regularity”     0 == Never, 50% == Sometimes, 75% == Often, 100% == Always
    R_CDX, ± 0 . . . 100%      “inclusion”      0 == None, 15% == Few, 50% == Some, 100% == All. If the level of inclusion is 100%, the relational is superfluous and may be removed.
    R_CDX ± 0 . . . 100%       Wiring Neuron    Degree of influence/coupling: 100% is full suppression.
    R_DO                       Verb Neuron      Direct Object indicator
    R_ELT                      Noun Neuron      Membership in a class (e.g., on ‘dog’ for “dog is an animal”).
    R_GAMUT (0-31)             Any Neuron       Gamut table of 1-32 entries follows.
    R_IDO                      Verb Neuron      Indirect Object indicator
    R_IMPLIES ± 0 . . . 100%   Any Neuron       % degree of similarity to the target neuron. (“Dogs are animals” ==> 100%, “Cows may fly” ==> 30%).
    R_INHIBIT ± 0 . . . 100%   Any Neuron       % inhibition of firing the target neuron, even if other relationals enable it.
    R_NOT                      Any Neuron       Complements the present composite conditions set.
    R_POSSN, ± 0 . . . 100%    Noun Neuron      % of ownership. (“Dogs mostly have hairy coats.”)
    R_PREP                     Verb Neuron      Preposition (see examples below)
    R_PREP                     “when”           May be BEFORE, AFTER, etc. 50% ==> now.
    R_PREP                     “where”          May be IN, OUT, UP, DOWN, ABOVE, BELOW, etc.
  • Examples of Implies and Possession
  • Other examples of relationships established using the relational records of Table 11 are shown in FIG. 23, particularly illustrating the R_IMPLIES (100%) case and the NOT (inhibitor) case. The linkage codes may be intermixed within either the noun or verb neuron relationals.
  • Example of Not (Negation)
  • Use of negation is primarily an ‘inversion’ operation. For example, in FIG. 24, dogs are established to be animals (via Implies), and to have ‘dogpaws’ (via Possn). The constraint is put on dogpaws (via Not) that inhibits ‘animals’ from turning on unless ‘dog’ is hardly firing.
  • That is, Not complements (subtracts from 100%) the present recognition level of ‘dog’. If we don't think the object we're looking at is a dog, i.e., the firing level for ‘dog’ is only 20%, use of a Not then inhibits ‘animal’.
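  • As a one-line sketch of that arithmetic (a percent scale is assumed), the Not relational contributes the complement of the present firing level:

    // Complement used by a Not relational: 'dog' firing at only 20% yields an 80%
    // value, which then inhibits 'animal' (FIG. 24).
    double notContribution(double firingPercent /* 0..100 */) {
        return 100.0 - firingPercent;
    }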
  • Sleep-Time Cleanup
  • For a given neuron, there may be many sub-lists of relationals that are identical, replicates of each other created when the same fact is re-learned at a later date. There may also be sub-lists that are virtually identical, except perhaps for a relatively small difference in the weights used.
  • To condense such sub-lists and reclaim the space, a ‘background job’ can be run while the brain is sleeping or otherwise not occupied. This operation can go in and remove the redundant linkage, adjusting the weights to other neurons to a suitable compromise value.
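  • A simplified sketch of such a condensation pass is shown below; it treats each relational record individually and merges exact target/type replicates to an averaged weight, which is only one of many possible compromise rules.

    #include <cstddef>
    #include <vector>

    struct Relational { int targetNeuron; int type; double weight; };

    void condense(std::vector<Relational>& rels) {
        for (std::size_t i = 0; i < rels.size(); ++i) {
            for (std::size_t j = i + 1; j < rels.size(); ) {
                if (rels[j].targetNeuron == rels[i].targetNeuron && rels[j].type == rels[i].type) {
                    rels[i].weight = (rels[i].weight + rels[j].weight) / 2.0;  // compromise value
                    rels.erase(rels.begin() + j);                              // reclaim the slot
                } else {
                    ++j;
                }
            }
        }
    }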
  • The Concept of Gamut and its Use
  • Gamut is a means of expressing a range of values or a group of ordered relationals in an ordered list. It includes expression of notions such as:
      • Direction
      • Ranges of Feelings or Emotion
      • Tenses of Irregular Verbs
      • Relative Strengths or Characteristics of Adjectives or Adverbs
      • Relative Merit of Human Senses—Sight, Sound, Balance, Touch, Taste
  • In each case, a specific concept or term (e.g., color) is being enumerated, and a set or list of terms expresses the known variants of that concept. For example, color may include violet/purple, blue, green, yellow or red in its gamut—and many other variants besides. Each item of terminology in the list is then assigned a relative value (it may also be an absolute measure, such as wavelength). Gamut may also be applied to certain aspects of mental state, and to emotion. See, for example, FIG. 25.
  • FIG. 25 expresses directness of communication along a gamut of 0-100%, and defines certain words that imply various degrees of the state of directness. It also illustrates use of idioms (‘to the point’) in the enumeration of gamut. This will be described in more detail herein below.
  • The Value of Gamut
  • The value of organizing information around gamut (where appropriate) is that nuances of expression can immediately be correctly deciphered from sentences directed at the behavioral brain model. Just as useful is that it permits the rapid construction of sentences directed at another person, sentences that contain human-like nuance.
  • In both cases, idioms may be freely used in enumerating gamut. Use of idioms gives enormously human-like appeal to the outbound part of the conversation. Idioms are word phrases whose meaning is culturally determined, a meaning different from that of the individual words used in the phrase. For example, “off the wall” does not imply removal of something from a wall. Rather, it currently means unusual or strange, in the positive sense of the terms.
  • With gamut, there is no need to algorithmically account for use of specialized words; one simply specifies that they are part of the enumerated gamut of values that something can take on.
  • As an example, consider “color.” Color is an adjective whose value ranges from deep violet at 390 nm wavelength to deep red at 710 nm wavelength. The gamut of color includes the list {deep violet, violet, blue, green, yellow, orange, red, and deep red}. Of course, when colors are mixed with varying brightness of white, or are of lower intensity than their surroundings, an additional gamut of tints may be added to the list of colors defined. For example, orange turns to brown if the intensity is reduced, or to tan if white light color is added to it.
  • As an example of the use of gamut, consider that one system generates an expression to convey some level of alertness. Suppose that that system generates a phrase as follows—“I am very (expression of alertness) about walking along this path.” This is a situation where a system may have been trained such that the presence of the path, i.e., an object associated with the system, has some prior danger associated therewith or, in the alternative, some euphoric event. This is basically a relational associated with the particular word “path” which, when associated with the particular phrase, especially with the term “walking” would raise some type of alertness, the concept of alertness having varying levels of expression that extend over a range of expressions. The alertness could range from a rather unconcerned level of alertness which would be basically inattentive, i.e., there was no concern about traversing the path, or it could be one of high alertness or urgency. The concept of alertness could be neutral with positive and negative ranges disposed thereabout. The relationship with the path would define how this is expressed by the one system. For example, the phrase could be “I am very bored about walking along this path” or it could be “I am very frightened about walking along this path.” However, the level of danger or alertness would be expressed in a percentage from a low percentage, i.e., inattentive or unconcerned to a high level, i.e., very frightened. The danger level would result in the various levels of alertness being selected between the extrema. An example of the gamut is set forth as follows for alertness:
  • alertness,n,adj(agam(“solidly asleep”=0, sleep=5, inattentive=10, “out of it”=20,\bore=30, weary=35, relax=40, docile=50, warn=55,\concern=60, apprehensive=65, fear=68, trepid=72, attentive=76,\alert=80, energy=85, urgent=90, fright=95, horrify=100))
    alertness (1036): GAMUT ADJ, 19
    CDX solidly asleep (1037)
    CDX sleep (7%) (583)
    . . .
    CDX relax (39%) (945)
    CDX Docile (803)
    CDX warn (55%) (1022)
    CDX concern (62%) (771)
    . . .
    CDX energy (84%) (817)
    CDX urgent (91%) (1019)
    CDX fright (94%) (846)
    CDX horrify (100%) (871)
  • The above noted table is associated with an alertness neuron, defined as neuron "1036" in the system. Each neuron in the system is defined by a number which is disposed in parentheses adjacent to a particular term. For example, the alertness neuron would have a gamut table (ordered list) associated therewith which would have, for example, a rating of 91% for the term "urgent" which is associated with the neuron "1019." This is a forward relational, whereas the urgent neuron, neuron "1019," would have a back relational to the alertness neuron "1036." These gamuts are basically tables of ordered lists in the relational portion of a particular neuron which constitutes the root neuron for the gamut, and these are created during population of the system. This example will be described in more detail herein below.
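  • The layout can be pictured, purely as an illustrative sketch, as an ordered list of (value, member-ID) pairs held with the root neuron, each member carrying a back relational to the root; the structure names below are assumptions, not defined elements of this disclosure.

    #include <vector>

    struct GamutEntry { int percent; int memberNeuronId; };   // forward (CDX) relational

    struct GamutOwner {
        int                     rootNeuronId;   // e.g., the 'alertness' neuron (1036)
        std::vector<GamutEntry> entries;        // ordered list, 0% .. 100%
    };

    // Each member word/neuron carries a back relational naming the root owner,
    // plus its own relative value within that gamut.
    struct GamutMember {
        int neuronId;       // e.g., 'urgent' (1019)
        int rootNeuronId;   // back relational to 'alertness' (1036)
        int percent;        // 91, in the example above
    };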
  • Applications
  • Many places exist in the brain model where gamut is very useful. A few such applications include:
      • Emotion—E.g., State of feeling: like I'm dead, awful, pretty bad, not too good, a bit down, okay, pretty good, good, great, awesome, fantasmic
      • Irregular Verb Tenses—E.g., Do, did, done
      • Quantifiers—E.g., None, a few, several, some, lots, a lot of, all
      • Sequences—E.g., out to the front, off to the left, off to the rear, off to the right
      • Deference Chains—E.g., Army: private, private first class/PFC, corporal/Specialist 4, sergeant/buck sergeant, staff sergeant, sergeant first class, sergeant major, command sergeant major
      • Relative Adjectives and Adverbs—E.g., Least, a lot less, less, some, more, a lot more, most
        Many other useful applications of gamut exist as well.
        Implementation
  • There are many ways to implement gamut, as one skilled in the art will know. In each case, implementation should include the following:
  • List of Gamut Values in association with a root neuron, Each Independent of the Root Owner
  • Handling of Multi-word Phrases or Idioms.
  • An example of this is that color, orange, off-white and red are all commonly-used terms in contemporary English. Any of them must be parsed and recognized for what they are in a sentence, and all of them must be available for the formation of ‘out-bound’ sentences in reply. In this case, color is the ‘owner’ or root neuron of the gamut, and the names of the colors are independent terms in their own right, but all point to color, the owner.
  • Words for a brain emulation can be stored and accessed in a variety of formats, such as linear lists, sorted tables with binary search, hash tables, and the like. The exact method used is less important than the fact that once a word is looked up, immediate access is provided to additional information about that word. That is, a lookup of a word subsequently provides fast access to information about the word.
  • Independent Access
  • Every word in the brain's present vocabulary is independent of each other, for purposes of both parsing and construction of new ‘out-bound’ sentences. Of course, ‘red’ implies a color to us, but the term color need never be looked up or referenced, for red to be used in a sentence. In this manner, access to both the root gamut owner word and the words within the gamut are essentially independent of each other.
  • Multi-Word Phrases or Idioms
  • It is appropriate to use word phrases that are culturally defined (idioms) as a part of the expression of gamut. For this reason, the word storage and lookup mechanism (‘black box’) must be able to look up idiom word-sets as well as single words. That is, the phrase, “to the point” is just as valid a means of gamut expression as is the single word “blunt”.
  • It would take a lot of work and logic to make associations for the constituent words of an idiom, welding them in some manner into a unique meaning separate from their original separate meanings. The manner of storage and retrieval should therefore make complete provision for such word-groups as a set, returning the identity of a single item (neuron) that represents the meaning of the multi-word idiom.
  • This means that multi-word handling applies both to the storage architecture and to the lookup means chosen. For lookup, the parsing of a sentence should yield a single item (value) for the entire multi-word phrase that constitutes the idiom. That is, each known idiom can have a unique neuron identified with it.
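  • A word store meeting that requirement can be sketched, for illustration, as a map keyed by the full word or phrase; the hash table shown is only one of the storage choices mentioned above, and the class and method names are hypothetical.

    #include <optional>
    #include <string>
    #include <unordered_map>

    // Either a single word ("blunt") or an entire idiom ("to the point") resolves
    // to one neuron ID, so the parser receives a single item for the whole phrase.
    class WordStore {
    public:
        void define(const std::string& wordOrPhrase, int neuronId) {
            table_[wordOrPhrase] = neuronId;
        }
        std::optional<int> lookup(const std::string& wordOrPhrase) const {
            auto it = table_.find(wordOrPhrase);
            if (it == table_.end()) return std::nullopt;
            return it->second;
        }
    private:
        std::unordered_map<std::string, int> table_;
    };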
  • Association of Enumeration with Owner
  • The identification (be it a pointer, index, text name or neuron ID) of the gamut's root owner—color, in the case of preceding examples—should be stored somewhere in the data associated with each word of the gamut. Red, green, violet and yellow must each retain the ID of color with it. It must also have a marker that indicates it to be part of a gamut of values. Each color need know nothing about other members of the gamut enumeration for color, only that it is a gamut value and what the root name/identification of that gamut is. If other members must be known, that must be via the root owner's ID in some manner.
  • Owner's List of Enumerated Words
  • Unlike the members of the gamut enumeration, the gamut's root owner must maintain a list of all constituent members of the gamut in an ordered list. For example, color must contain a list of all members of the color gamut, again as a list of identifiers such as those used for the members.
  • Example Organization of Memory for Gamut
  • Gamut may be applied to any measurable quantity, color, for example. When the current numerical value of color is known, the appropriate name or terminology to express that color can be quickly chosen by indexing into the list of gamut identifiers.
  • For sake of variety, a small random number can be added to the present numerical value and the sum used as the gamut lookup index instead. A convenient place to store the gamut tables in the brain model of this invention is in the relationals list that is associated with every neuron.
  • If memory is organized properly, all items in the list imply color, and each implies a specific quality of color. To large measure, the enumeration terminology can be used independently of the root word the gamut applies to. Violet can be recalled and used without regard to whether or not color is referenced as a word in the present sentence or context.
  • A depiction of a storage layout that permits this is shown in FIG. 26, and applies to the gamut for directness illustrated in FIG. 25. Only two of the example gamut enumerations (‘oblique’ and ‘inarticulate’) are shown. The same layout scheme also applies to color and its gamut enumeration.
  • Core Word+List of Expressions for that Word
  • FIG. 26 shows an example ‘gamut owner’ or ‘root’ for the gamut process. In this case, it represents a hypothetical mental state within the brain defined here as directness, meaning the directness with which the brain will communicate at the moment.
  • The directness (neuron) is looked up by name “directness” or may be otherwise known by a priori knowledge or design. Associated with it—through training—is a set of terms that convey the gamut of directness, ranging from completely inarticulate to very blunt. Regardless of the precise terms used to enumerate the levels of gray in between these two extremes, each enumeration contains an associated value. The examples used here use a range of 0-100%.
  • The gamut table happens to be stored in the neuron area set aside for relationals, weighted connections to other neurons. The term ‘directness’ might imply the word should be given a noun type. The fact that it contains a gamut list is recorded as a type-related flag bit or code.
  • The gamut table itself contains a list of identifiers that permit access to words/neurons that define a specific level of directness. When at a later time it is desired to communicate the present mood of directness (which, for example, might have a value of 60% defined external to the neuron), it is necessary to first look up or recall the term directness. The hardware or software implementing the brain then looks for the closest entry in the table, which happens to be 70%, implying the level of directness is ‘concise’.
  • Independent Values for each Word in the Gamut List
  • The gamut table itself contains a list of identifiers that permit access to words/neurons that define a specific level of directness. The specific level of directness is recorded in the gamut table along with the identifier, and is duplicated as an internal value stored with the related enumeration word. The duplication is not crucial to implementation, but can save additional search time in many cases.
  • Independent Access of Words, Phrases or Idioms
  • All constituent elements of the gamut list—and gamut list owner—can be looked up in memory using the system of implementation chosen for the memory storage area. This means that no secondary lookups are necessary for the parsing or understanding of a word within a sentence. Because the gamut table already contains a list of identifiers for the constituent parts, no lookup is needed for any of them, because the index or identifier can directly imply an address or location in the memory where the data is contained.
  • Referring now to FIG. 27, there is illustrated a more detailed diagram of a particular root neuron 2702 associated with a particular term. This term may, in and of itself, be related to other terms not associated with a gamut. These are the relationals, termed R1, R2 and R3, for example. However, one of the relationals for this particular neuron 2702 is a gamut, i.e., this provides information to this particular neuron 2702 that it has many shades of meaning, which particular shades of meaning are selected through some type of gamut input from an external source. Each of the relationals R1, R2 and R3, for example, has a relationship or a link with a particular neuron, neurons N1, N2 and N3 respectively. The gamut relational is associated with a table 2704 which is part of the neuron. This table has N relationships associated therewith, each of which gamut relationships has a percentage associated therewith, from 0% at one extreme to 100% at the other. With the gamut input being a percentage value defining which of the gamut values is to be output, this will define the relationship to one of the neurons associated with particular gamuts. For the gamuts G1 and G2, for example, there are provided neurons NG1, NG2, etc., up to neuron NG3. As an example of a particular neuron, consider the following table for an anticipation neuron. In general, gamuts are used primarily with adverbs, adjectives and verbs. The neuron anticipation is associated with two different gamut tables, one for an adjective and one for an adverb. The word type will define which of the gamut tables is utilized. In this table, there is provided an indication of how many entries are associated with a particular gamut table. In this example, there are twenty-one relationals associated with each gamut table: 21 for the adjective (ADJ) type and 21 for the adverb (ADV) type. Only a portion of these are illustrated.
    anticipation (1045): GAMUT
    ADJ, 21
    CDX traumatize (999)
    CDX dread (7%) (805)
    CDX nothing good can come of this (13%) (1046)
    . . .
    CDX forebode (42%) (841)
    CDX trepid (42%) (1001)
    . . .
    CDX seek (81%) (545)
    CDX anticipate (84%) (729)
    CDX strongly anticipate (91%) (1048)
    CDX antsy (100%) (730)
    ADV, 21
    CDX traumatize (999)
    CDX dread (7%) (805)
    CDX nothing good can come of this (13%) (1046)
    . . .
    CDX nervous (33%) (917)
    CDX constrain (36%) (776)
    CDX forebode (42%) (841)
    . . .
    CDX search (81%) (961)
    CDX anticipate (84%) (729)
    CDX strongly anticipate (91%) (1048)
    CDX antsy (100%) (730)
  • As an example of the use of the gamuts, suppose that a particular word is determined to be an adjective and it is associated with the anticipation mood. Further, assume that an external source has determined that the anticipation level or mood is at a level of approximately 40%. The system would look at the input and then select the particular term to assert in a sentence or otherwise, that term being “forebode.” If a sentence were being constructed, this would be the word that this particular system utilizes. The input that defines it, i.e., the percentage input, would be one that is learned with experience. For example, the context of the sentence, as described above with respect to the “path” example, would define for a particular system what that percentage value would be. However, this could be different for different systems.
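  • A minimal sketch of this selection step follows, using fragments of the anticipation gamut tables above. Nearest-entry selection is an assumption consistent with the example; the word type picks the table and the externally supplied level (about 40%) then picks the nearest entry, ‘forebode’ at 42%.
      # Sketch of the selection step, using fragments of the anticipation
      # gamut tables above.  The word type picks the table; the external
      # level (~40%) picks the nearest entry, 'forebode' at 42%.
      anticipation_gamuts = {
          "ADJ": [(7, "dread"), (42, "forebode"), (84, "anticipate"), (100, "antsy")],
          "ADV": [(7, "dread"), (33, "nervous"), (42, "forebode"), (100, "antsy")],
      }

      def choose_term(word_type, level_pct):
          table = anticipation_gamuts[word_type]
          return min(table, key=lambda entry: abs(entry[0] - level_pct))[1]

      print(choose_term("ADJ", 40))   # -> 'forebode'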
  • Referring now to FIG. 28, there is illustrated the interaction of two systems for this example of the use of “forebode.” With respect to FIG. 28, one system, a system 2802, generates the sentence “I have a foreboding feeling about this.” This sentence is communicated to a second system 2804. The system 2802 has a gamut table 2806 associated with the anticipation neuron, which has all of the terms for the adjective associated therewith, each having a percentage value from the lowest level of anticipation to the highest level of anticipation. As noted hereinabove, in this example the “forebode” entry in the gamut for anticipation is at the 42% level. The input level of anticipation is 40%. Therefore, the way this would be expressed would be by using the term “foreboding.” This would be communicated in the sentence, and the second system 2804 would review the word “foreboding,” which would be directed toward a neuron 2810 having a back relational to an anticipation neuron in the system 2804. That anticipation neuron would have its own gamut table 2812 for the adjective form of “forebode,” which could have a different level of anticipation associated therewith. This could be due to the fact that it was populated differently, trained differently, etc. In this example, it might be that the anticipation level is 10%, which means that this term “forebode” is associated with a much lower level of anticipation. Therefore, the system 2804 would view the sentence constructed by the system 2802 as conveying a lower anticipation level than the system 2802 intended.
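  • The interpretation side of this exchange might be sketched as follows; both tables are illustrative, with the 42% and 10% values taken from the example above. Each system maps the received word back to the level recorded in its own gamut table, so the same word can imply different anticipation levels in differently trained brains.
      # Sketch of the interpretation side: each system maps the received word
      # back to the level in its own gamut table, so the same word can imply
      # different anticipation levels in differently trained brains.
      system_2802_table = {"forebode": 42}   # the sending system's adjective gamut
      system_2804_table = {"forebode": 10}   # the receiving system, trained differently

      word = "forebode"   # conveyed in "I have a foreboding feeling about this."
      print("system 2802 meant about", system_2802_table[word], "% anticipation")
      print("system 2804 reads it as about", system_2804_table[word], "% anticipation")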
  • Parsing and Gamut
  • The language parser is the device or object that reads sentences (e.g., English) and gleans purpose and intent from their structure. So-called ‘natural language’ parsers operate by breaking sentences into their constituent parts in a top-down fashion. Sentences are broken into clauses, and clauses into subject and predicate. The latter are in turn broken down into phrases, special structural words, and lowest-level parts of speech, the words themselves.
  • The words are looked up in a manner appropriate to the neural model or brain system chosen. In the case of the disclosed system, many words are specialized and treated as nerve-like interfaces to the outside world. Even with these specialized behaviors, each word is looked up by the parser in the brain's memory, and the related semi-permanent neuron ID for it is returned to the parser.
  • In the disclosed behavioral brain model, a definition of English grammar is first defined and then fed to a software tool. The software tool in turn compiles the definition of English grammar and synthesizes a complete parser for the English language. Such tools exist in the general marketplace.
  • The definition utilized in this disclosure is coded in a ‘BNF’ (Backus-Naur Form) file. The complexities and vagaries of the English grammar, structure and lexical format are defined in that BNF file. Should there be a problem parsing a particular sentence, it is the BNF file that must usually be corrected to handle the case.
  • For the sake of gamut, the primary requirement placed on the parser is that it be able to treat multi-word phrases and idioms as equivalent to single words. In the disclosed behavioral brain model system, each word represents a single neuron. Idioms and multi-word phrases (such as ‘to the point’) are likewise treated as single neurons.
  • With this proviso in place, it is then very easy to define gamut lists using the vernacular of the current generation, if desired. When the parser encounters any such multi-word item or idiom, it finds it as its own word-set with its own neuron ID. Such multi-word phrases or idioms can be either taught—such as by using definitive sentences—or pre-loaded to the brain from a specially formatted list appropriate as a dictionary. An appropriate place for these in the Neuric behavioral brain model is in a special file of structural English words that represent essential knowledge.
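  • One way the parser-side requirement might be sketched (an assumption for illustration, not the disclosed parser) is a greedy longest-match pass that collapses known multi-word phrases into single tokens or neuron IDs ahead of grammatical parsing, so ‘to the point’ is handled exactly like ‘blunt’. The dictionary and IDs are illustrative only.
      # Sketch: greedy longest-match collapsing of known multi-word phrases
      # into single tokens (neuron IDs) before grammatical parsing.
      phrase_neurons = {("to", "the", "point"): 2001, ("blunt",): 2002}
      MAX_PHRASE = max(len(key) for key in phrase_neurons)

      def tokenize(words):
          tokens, i = [], 0
          while i < len(words):
              for span in range(min(MAX_PHRASE, len(words) - i), 0, -1):
                  candidate = tuple(words[i:i + span])
                  if candidate in phrase_neurons:
                      tokens.append(phrase_neurons[candidate])
                      i += span
                      break
              else:
                  tokens.append(words[i])   # word with no phrase entry passes through
                  i += 1
          return tokens

      print(tokenize("please get to the point".split()))   # -> ['please', 'get', 2001]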
  • The Training of Gamut
  • The disclosed behavioral brain model uses a file of information formatted for easy ingestion into the brain. (There also can be provided a second specialized parser that extracts and remembers data from this file.) The file primarily contains ‘structural words’, specialized words that do not change much from generation to generation. It excludes most nouns and most verbs.
  • Verbs that remain in that initial-word training list include the irregular verbs (do/did/done), various forms of to be, and other such verbs that hold special places in English grammar. Similarly, adverbs and adjectives relating to human senses are all retained there, as are all the prepositions. Essentially, any word needed to relate to ‘the world immediately around me’ is kept there.
  • This initial-word training list is a natural place to put common idioms and gamut tables. Doing so also permits ready tweaking of relationships that are not black-and-white in English. For example, one can define a gamut of ‘how much’ that includes: none, a little, some, quite a little, quite a lot, gobs and mountain of. If ‘some’ is an indicator of ‘an okay amount’, it may not be obvious that quite a little should indicate more than ‘some’ implies. Gamut—and gamuts pre-stored in the structural words table—permit one to readily tweak the relative position and orderings of such words.
  • It is entirely acceptable that two persons' opinions of the ‘proper’ order might be at odds with each other. The system of gamut permits us to readily handle that case.
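  • A minimal sketch of such a pre-stored ‘how much’ gamut follows; the terms come from the list above, while all of the percentages are illustrative assumptions. Tweaking the relative position of a fuzzy term is then just an edit to its value.
      # Sketch: the 'how much' gamut pre-stored as a simple ordered table, so
      # the relative positions of fuzzy terms can be tweaked by editing values.
      how_much = [
          (0, "none"), (10, "a little"), (40, "some"), (55, "quite a little"),
          (70, "quite a lot"), (85, "gobs"), (100, "mountain of"),
      ]

      # Deciding that 'quite a little' should fall below 'some' is just a
      # value change followed by a re-sort:
      how_much[3] = (35, "quite a little")
      how_much.sort(key=lambda entry: entry[0])
      print([term for _, term in how_much])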
  • Gamut Tables—Specifying Relative Values of a Set
  • Many words form ‘sets’ of similar items whose values lie along a scale. For example, adverbs expressing how often or how many might be expressed in the following way:
      • How Often: Never=0%, seldom=15%, sometimes=40%, regular=50%, often=75%, always=100%
      • How Many: None=0%, a few=15%, some=50%, a lot=60%, most=75%, “almost all”=92%, all=100%
  • Certain tables that define these relationships (which are named gamut tables) can be used to define both the terminology and the neurons that depict the nuances of value over the ‘gamut’, or range, of possible values. The same concepts and table types apply to adverbs, adjectives, irregular verbs (tenses), emotion, and tables of deference (e.g., military, political or social rank, which are non-weighted ordered lists).
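  • The two table flavors just named might be sketched as follows: a weighted gamut (using the ‘How Often’ values from the example above) and a non-weighted ordered list for deference, where only rank order carries meaning. The military ranks shown are illustrative assumptions.
      # Sketch of the two table flavors: a weighted gamut table and a
      # non-weighted ordered list for deference (only rank order matters).
      how_often = [(0, "never"), (15, "seldom"), (40, "sometimes"),
                   (50, "regular"), (75, "often"), (100, "always")]

      deference = ["private", "corporal", "sergeant", "lieutenant", "captain"]

      def outranks(a, b):
          """True if a appears later (higher) in the ordered deference list."""
          return deference.index(a) > deference.index(b)

      print(outranks("captain", "sergeant"))   # -> True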
  • Gamuts of Emotion
  • Some 250 or more separate emotions can be readily identified, some of which are mutually exclusive and some of which describe markers along a range of values (i.e., a gamut of emotions). That set of emotions has been divided into some 30+ specific emotions, each having its independent gamut for which certain values are named.
  • Other groupings or divisions of emotions could also be used without altering the concept being described here. Additionally, other emotions exist that are not reasonably described using a gamut-based enumerated set of names.
  • The value of the gamut approach is simplification of emotions into closely-related categories that the brain model can describe to an interested party. Rather than stating the percentage of emotion it feels (i.e., 0-100%, which would be silly and stilted), it can now use the conventional terminology that describes its present feeling. This also permits the use of idioms (well being or scatter-brained) to succinctly communicate nuances of emotion.
  • Finally, the gamut concept is fully compatible with the system of weighted relationals used with individual neurons in the disclosed behavioral brain model. Each root emotion can be configured to reserve 32 consecutive (preferably the first 32) relational slots to depict the name of a variant of emotion. While 32 slots is a matter of convenience, variable-length lists or other fixed-length list sizes can be used. The assignment of weight-codes or percentages for the gamut table is described herein above.
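  • A minimal sketch of the arithmetic this implies follows: with 32 reserved relational slots, a 0-100% intensity can be carried as a 0-31 code, in steps of roughly 3%, and converted back when the value is needed.
      # Sketch: 0-100% emotion intensity carried as a 0-31 code that fits the
      # 32 reserved relational slots (steps of roughly 3%).
      def pct_to_code(pct):
          return round(pct / 100 * 31)      # 0..31 fits the 32 reserved slots

      def code_to_pct(code):
          return code * 100 / 31            # roughly 3.2% per step

      print(pct_to_code(42))                # -> 13
      print(round(code_to_pct(13), 1))      # -> 41.9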
  • Such a gamut of feelings might look something like that illustrated in FIGS. 29a and 29b, an example of what a mental clarity emotion's mapping might look like. Like other gamuts illustrated, the choice of underlying emotion name and the terms used to describe its intensity are subject to change, tweaking and additions. The examples are intended to be illustrative and not precise, and actual values used may reasonably be quite different.
  • The ideal choices for nomenclature would be mutually exclusive within a given emotion. The challenge is to properly identify what names are simply enumerations of an underlying emotion, and what that underlying emotion might properly be.
  • Obviously, the intensity of a given emotion could vary from 0-100%, or even −100% to +100%. While either range could be used, the range of 0-100% is preferably utilized, with 50% being a nominal emotional state with “nothing happening”.
  • The following table shows example gamuts of emotion. The percentage assignments happen to be loosely based on 3% increments, such that the gamut can be expressed over a range of 32 unique values. (This way, a range of 0-100% can be expressed as a value from 0-31.)
  • The nomenclature in the table names the representative emotions as ‘E_emotion-name’, where emotion-name is the root emotion being assigned a gamut of values. The suffix “,g,e” is a syntax of convenience that happens to be used in an implementation of the Neuric brain model, although other means can be used to depict the type of value being described. The numeric values given are values (in percent) that approximate the value of the root emotion for which the name applies.
      • E_acceptance,g,e(bitterness/bitter=0, rejected=10, disapproved=15, distant=20, separated=25, suspicion=30, negative=35, lonely=40, alone=45, indifferent/indifference=50, tolerance=55, accepted/acceptance=65, friendship=70, closeness/close=75, connected=80, delighted/delight=85, approval=90, amazement=100)
      • E_alertness,g,e(“deep sleep”=0, sleepy/sleep=5, inattentive=10, “out of it”=20, boredom/bored/bore=30, weary=35, relaxed/relaxation/relax=40, docile=50, warn/warning=55, concerned=60, apprehension/apprehensive=65, fearful=68, trepidation=72, attentive=76, alert/alertness=80, energetic/“emotional energy”=85, urgency=90, fright=95, horror=100)
      • E_amusement,g,e(dazed=0, grief=10, shocked/shock=20, “un-amused”=30, serious=35, “not funny”=40, indifferent=50, warmed=60, humored/humor=75, amused=85, mirthful/mirth=100)
      • E_anticipation,g,e(trauma=0, dread=10, frightened/fright=15, warned=20, suspicious/suspicion=25, negative=30, nervous=35, constrained=39, trepidation/trepid=42, boredom/bored=45, ennui=48, commonplace/“common place”/“common-place”=50, intrepid=55, expectation=60, desirous/desire=65, optimistic=70, persistence/persistent=75, seeking/seek=80, anticipation=85, “strong anticipation”=90, antsy=100)
      • E_composure,g,e(hysterical/hysteria=0, terror=5, shock=9, suffering=12, “torn up”/“torn-up”=15, frightened/fright=19, worried/worry=22, alarmed=26, anxiety=29, agitated/agitation=32, suspicious=35, troubled=38, confused=41, cautious/caution=44, sensitive=47, okay/Ok=50, calm=55, reconciled/reconcile=60, peace=65, competence/competent=70, cheer/cheerful=75, composed/composure=80, collected=85, optimistic/optimism=90, cool=100)
        In this manner, what are commonly regarded as separate emotions or states of emotion can be readily depicted with reasonableness and surety using the gamut system.
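  • As a minimal sketch of reading this notation (an illustration of the textual form shown above, not the actual training-file loader, and handling only the simple unquoted names), one such entry can be turned into a lookup table of name-to-value pairs, with slash-separated aliases sharing a value.
      # Sketch: turn one 'name/alias=value, ...' entry of the notation above
      # into a lookup table.  Simplified illustration only.
      def parse_gamut(spec):
          table = {}
          for item in spec.split(","):
              names, value = item.rsplit("=", 1)
              for name in names.split("/"):
                  table[name.strip()] = int(value)
          return table

      amusement = parse_gamut("dazed=0, grief=10, shocked/shock=20, amused=85, mirthful/mirth=100")
      print(amusement["shock"])   # -> 20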
  • In general, the concept of gamut offers a way to systematically define nuances of expression that are commonplace and in daily usage. It simplifies the organization and cross-linking of information, facts and relationships. Gamut is also a perspective and way of approaching the problem of nuance in human interaction.

Claims (11)

1. A method for emulating human cognition in electronic form, comprising:
receiving information in the form of a textual or voice input in a natural language;
parsing the received input into pre-determined phrases based on a stored set of language rules for the natural language;
determining if the parsed phrases define aspects of an environment and, if so, then creating weighting factors to the natural language that are adaptive, the created weighting factors operable to create a weighted decision based upon the natural language; and
determining if the parsed phrases constitute a query and, if so, then using the weighted factors to make a decision to the query.
2. The method of claim 1, wherein the set of rules includes fundamental parameters that underlie human personality, which are represented as technology-independent values such as digital numeric, analog values, optical intensity, mechanical position, or an atomic, electron or chemical state, spin or phase.
3. The method of claim 2, further comprising a set of four temperament-specific parameters, each representing one of the four classical personality temperaments, Choleric, Melancholy, Sanguine and Phlegmatic, each being represented as a percentage, which when summed represent, for example, 100%, and which take on values of 0 . . . 100%.
4. The method of claim 3, in which the temperament parameters are scaled and summed to control personality tendencies, each such represented as an additional temperament-dependent parameter.
5. The method of claim 3, wherein the temperament-dependent parameters are each applied to control decision thresholds, gains, ‘pseudo-random’ but statistical selections, filter selections, and the like, so as to control behavioral processes on the basis of temperament.
6. The method of claim 1 in which the outputs of three of the four temperaments are suppressed or reduced, under sensed conditions of extreme emotional or physical trauma, and such that the resultant personality is fundamentally determined by a single dominant temperament, the sense of trauma being established either directly or by change-of-conditions, for example, by differentiation of the condition's value.
7. The method of claim 6 in which the sense of trauma is automatically diminished in linear or logarithmic fashion, unless sustained or reinitiated by new traumatic events.
8. The method of claim 6, in which the sensed conditions of extreme emotional or physical trauma are represented by a computed value or summation, said computed value being represented by technology-independent value, said technology independent value being represented by specific parameters or emulated neurons.
9. The method of claim 3, in which the emulation has a single temperament which is ‘pre-wired’ as at human birth, is augmented by subsequent experience and training, and results in a composite personality. The impact of such training and experience upon the emulated brain is reduced or suppressed upon a sense of sudden or sustained trauma so as to cause behavior to revert to that of the ‘pre-wired’ temperament.
10. The method of claim 4, in which the scaled and summed temperament parameters are scaled, summed, inhibited or enabled by values obtained from brain nodes or parameters that represent the present conditions of the brain emulation, thereby forming an augmented composite signal, said augmented composite signal being used to alter or control decision thresholds, gains, ‘pseudo-random’ but statistical selections, filter selections, and the like, so as to further control behavioral processes on the basis of temperament.
11. The method of claim 4, in which the scaling weights are selected such that most of the temperament parameters are characteristics that apply dominantly to two of the four temperament types, and only minimally to the remaining two temperaments, and for which the impact of one of the two dominant temperaments is much greater than for the other, for example, in ratios of 50%, 35%, 10% and 5%, such that a given trait is shared in great measure by two of the temperaments, with one of them dominant, but shared only in small measure by the remaining two temperaments.
US11/758,667 1996-05-06 2007-06-05 Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain Expired - Fee Related US7925492B2 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
US11/758,667 US7925492B2 (en) 2004-01-06 2007-06-05 Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain
US12/101,904 US8725493B2 (en) 2004-01-06 2008-04-11 Natural language parsing method to provide conceptual flow
US12/136,670 US20080243741A1 (en) 2004-01-06 2008-06-10 Method and apparatus for defining an artificial brain via a plurality of concept nodes connected together through predetermined relationships
US12/645,317 US8473449B2 (en) 2005-01-06 2009-12-22 Process of dialogue and discussion
US13/085,289 US9064211B2 (en) 2004-01-06 2011-04-12 Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain
US13/926,860 US20140012798A1 (en) 2005-01-06 2013-06-25 Process of dialogue and discussion
US14/273,143 US20140303963A1 (en) 1996-05-06 2014-05-08 Natural language parsing method to provide conceptual flow

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
US53464104P 2004-01-06 2004-01-06
US53449204P 2004-01-06 2004-01-06
US53465904P 2004-01-06 2004-01-06
US3045205A 2005-01-06 2005-01-06
US11/154,313 US7089218B1 (en) 2004-01-06 2005-06-16 Method for inclusion of psychological temperament in an electronic emulation of the human brain
US76444206P 2006-02-02 2006-02-02
US79016606P 2006-04-07 2006-04-07
US81129906P 2006-06-05 2006-06-05
US11/425,688 US7849034B2 (en) 2004-01-06 2006-06-21 Method of emulating human cognition in a brain model containing a plurality of electronically represented neurons
US11/670,959 US20070156625A1 (en) 2004-01-06 2007-02-02 Method for movie animation
US11/697,721 US8001067B2 (en) 2004-01-06 2007-04-07 Method for substituting an electronic emulation of the human brain into an application to replace a human
US11/758,667 US7925492B2 (en) 2004-01-06 2007-06-05 Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/697,721 Continuation-In-Part US8001067B2 (en) 1996-05-06 2007-04-07 Method for substituting an electronic emulation of the human brain into an application to replace a human

Related Child Applications (4)

Application Number Title Priority Date Filing Date
US12/101,904 Continuation-In-Part US8725493B2 (en) 1996-05-06 2008-04-11 Natural language parsing method to provide conceptual flow
US12/136,670 Continuation-In-Part US20080243741A1 (en) 2004-01-06 2008-06-10 Method and apparatus for defining an artificial brain via a plurality of concept nodes connected together through predetermined relationships
US12/645,317 Continuation-In-Part US8473449B2 (en) 2005-01-06 2009-12-22 Process of dialogue and discussion
US13/085,289 Continuation US9064211B2 (en) 2004-01-06 2011-04-12 Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain

Publications (2)

Publication Number Publication Date
US20070288406A1 true US20070288406A1 (en) 2007-12-13
US7925492B2 US7925492B2 (en) 2011-04-12

Family

ID=38823087

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/758,667 Expired - Fee Related US7925492B2 (en) 1996-05-06 2007-06-05 Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain
US13/085,289 Expired - Fee Related US9064211B2 (en) 2004-01-06 2011-04-12 Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/085,289 Expired - Fee Related US9064211B2 (en) 2004-01-06 2011-04-12 Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain

Country Status (1)

Country Link
US (2) US7925492B2 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060248461A1 (en) * 2005-04-29 2006-11-02 Omron Corporation Socially intelligent agent software
US20080079965A1 (en) * 2006-09-27 2008-04-03 Andrew Jackson Method, apparatus and technique for enabling individuals to create and use color
US20100042568A1 (en) * 2004-01-06 2010-02-18 Neuric Technologies, Llc Electronic brain model with neuron reinforcement
US20100185437A1 (en) * 2005-01-06 2010-07-22 Neuric Technologies, Llc Process of dialogue and discussion
US20130172693A1 (en) * 2010-09-07 2013-07-04 Penina Ohana Lubelchick Diagnosing system for consciousness level measurement and method thereof
US9064211B2 (en) 2004-01-06 2015-06-23 Neuric Technologies, Llc Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain
US20160098629A1 (en) * 2014-10-01 2016-04-07 Thalchemy Corporation Efficient and scalable systems for calculating neural network connectivity in an event-driven way
US9667786B1 (en) * 2014-10-07 2017-05-30 Ipsoft, Inc. Distributed coordinated system and process which transforms data into useful information to help a user with resolving issues
WO2017195194A1 (en) * 2016-05-11 2017-11-16 Penina Ohana Lubelchick Diagnosing system for consciousness level measurement and method thereof
US10372826B2 (en) * 2017-09-15 2019-08-06 International Business Machines Corporation Training data update
US10376197B2 (en) 2010-09-07 2019-08-13 Penina Ohana Lubelchick Diagnosing system for consciousness level measurement and method thereof
US20200218860A1 (en) * 2014-04-18 2020-07-09 Thomas A. Visel Automated comprehension of natural language via constraint-based processing

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101173556B1 (en) * 2008-12-11 2012-08-13 한국전자통신연구원 Topic map based indexing apparatus, topic map based searching apparatus, topic map based searching system and its method
US9634855B2 (en) 2010-05-13 2017-04-25 Alexander Poltorak Electronic personal interactive device that determines topics of interest using a conversational agent
US20120016661A1 (en) * 2010-07-19 2012-01-19 Eyal Pinkas System, method and device for intelligent textual conversation system
US9355110B1 (en) 2011-07-14 2016-05-31 Google Inc. Dynamic presentation of data items based on prioritized associations
US9792095B2 (en) * 2014-11-25 2017-10-17 Symbol Technologies, Llc Apparatus and method for converting a procedure manual to an automated program
CN104462065B (en) * 2014-12-15 2017-10-27 北京国双科技有限公司 The analysis method and device of event affective style
US9336481B1 (en) * 2015-02-02 2016-05-10 James Albert Ionson Organically instinct-driven simulation system and method
WO2017068414A2 (en) * 2015-10-23 2017-04-27 Siemens Medical Solutions Usa, Inc. Generating natural language representations of mental content from functional brain images
US10133736B2 (en) * 2016-11-30 2018-11-20 International Business Machines Corporation Contextual analogy resolution
US10661438B2 (en) * 2017-01-16 2020-05-26 Ants Technology (Hk) Limited Robot apparatus, methods and computer products
US11163707B2 (en) * 2018-04-23 2021-11-02 International Business Machines Corporation Virtualization in hierarchical cortical emulation frameworks

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5371807A (en) * 1992-03-20 1994-12-06 Digital Equipment Corporation Method and apparatus for text classification
US5406956A (en) * 1993-02-11 1995-04-18 Francis Luca Conte Method and apparatus for truth detection
US5687286A (en) * 1992-11-02 1997-11-11 Bar-Yam; Yaneer Neural networks with subdivision
US5918222A (en) * 1995-03-17 1999-06-29 Kabushiki Kaisha Toshiba Information disclosing apparatus and multi-modal information input/output system
US6081774A (en) * 1997-08-22 2000-06-27 Novell, Inc. Natural language information retrieval system and method
US6296368B1 (en) * 1987-10-23 2001-10-02 Mag Instrument, Inc. Rechargeable miniature flashlight
US6330537B1 (en) * 1999-08-26 2001-12-11 Matsushita Electric Industrial Co., Ltd. Automatic filtering of TV contents using speech recognition and natural language
US6353810B1 (en) * 1999-08-31 2002-03-05 Accenture Llp System, method and article of manufacture for an emotion detection system improving emotion recognition
US6415257B1 (en) * 1999-08-26 2002-07-02 Matsushita Electric Industrial Co., Ltd. System for identifying and adapting a TV-user profile by means of speech technology
US6513006B2 (en) * 1999-08-26 2003-01-28 Matsushita Electronic Industrial Co., Ltd. Automatic control of household activity using speech recognition and natural language
US6584464B1 (en) * 1999-03-19 2003-06-24 Ask Jeeves, Inc. Grammar template query system
US20030130837A1 (en) * 2001-07-31 2003-07-10 Leonid Batchilo Computer based summarization of natural language documents
US6601026B2 (en) * 1999-09-17 2003-07-29 Discern Communications, Inc. Information retrieval by natural language querying
US20040181427A1 (en) * 1999-02-05 2004-09-16 Stobbs Gregory A. Computer-implemented patent portfolio analysis method and apparatus
US20040193420A1 (en) * 2002-07-15 2004-09-30 Kennewick Robert A. Mobile systems and methods for responding to natural language speech utterance
US6826568B2 (en) * 2001-12-20 2004-11-30 Microsoft Corporation Methods and system for model matching
US20040243568A1 (en) * 2000-08-24 2004-12-02 Hai-Feng Wang Search engine with natural language-based robust parsing of user query and relevance feedback learning
US20050010416A1 (en) * 2003-07-09 2005-01-13 Gensym Corporation System and method for self management of health using natural language interface

Family Cites Families (84)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0247163B1 (en) 1985-11-27 1992-02-05 Trustees Of Boston University Pattern encoding system
US4884972A (en) 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US5253328A (en) 1989-11-17 1993-10-12 Microelectronics & Computer Technology Corp. Neural-network content-addressable memory
US5170381A (en) 1989-11-22 1992-12-08 Eldon Taylor Method for mixing audio subliminal recordings
US5325464A (en) 1990-05-22 1994-06-28 International Business Machines Corporation Pyramid learning architecture neurocomputer
US5179631A (en) 1990-09-27 1993-01-12 Board Of Supervisors Of Louisiana State University And Agricultural And Mechanical College Neural network logic system
US5214715A (en) 1991-01-31 1993-05-25 Trustees Of Boston University Predictive self-organizing neural network
CA2040903C (en) 1991-04-22 2003-10-07 John G. Sutherland Neural networks
JP3369213B2 (en) 1991-05-08 2003-01-20 キャタピラー インコーポレイテッド Processor and processing element for use in neuron networks
GB9113553D0 (en) 1991-06-21 1991-08-14 Univ London Neural network architecture
US5649065A (en) 1993-05-28 1997-07-15 Maryland Technology Corporation Optimal filtering by neural networks with range extenders and/or reducers
US5412256A (en) 1994-01-06 1995-05-02 Bell Communications Research, Inc. Neuron for use in self-learning neural network
US5845271A (en) 1996-01-26 1998-12-01 Thaler; Stephen L. Non-algorithmically implemented artificial neural networks and components thereof
US20040243529A1 (en) 1996-03-25 2004-12-02 Stoneman Martin L. Machine computational-processing systems for simulated-humanoid autonomous decision systems
US20030110026A1 (en) 1996-04-23 2003-06-12 Minoru Yamamoto Systems and methods for communicating through computer animated images
US5920852A (en) 1996-04-30 1999-07-06 Grannet Corporation Large memory storage and retrieval (LAMSTAR) network
US8725493B2 (en) 2004-01-06 2014-05-13 Neuric Llc Natural language parsing method to provide conceptual flow
US8001067B2 (en) 2004-01-06 2011-08-16 Neuric Technologies, Llc Method for substituting an electronic emulation of the human brain into an application to replace a human
US7925492B2 (en) 2004-01-06 2011-04-12 Neuric Technologies, L.L.C. Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain
US7089218B1 (en) 2004-01-06 2006-08-08 Neuric Technologies, Llc Method for inclusion of psychological temperament in an electronic emulation of the human brain
US20070156625A1 (en) 2004-01-06 2007-07-05 Neuric Technologies, Llc Method for movie animation
US5995651A (en) 1996-07-11 1999-11-30 Duke University Image content classification methods, systems and computer programs using texture patterns
JPH10289006A (en) 1997-04-11 1998-10-27 Yamaha Motor Co Ltd Method for controlling object to be controlled using artificial emotion
US6269368B1 (en) 1997-10-17 2001-07-31 Textwise Llc Information retrieval using dynamic evidence combination
US6778970B2 (en) 1998-05-28 2004-08-17 Lawrence Au Topological methods to organize semantic network data flows for conversational applications
US6507828B1 (en) 1998-06-19 2003-01-14 Jason Leonard Neuron circuit and related techniques
US6108619A (en) 1998-07-02 2000-08-22 Novell, Inc. Method and apparatus for semantic characterization of general content streams and repositories
US7152031B1 (en) 2000-02-25 2006-12-19 Novell, Inc. Construction, manipulation, and comparison of a multi-dimensional semantic space
US7197451B1 (en) 1998-07-02 2007-03-27 Novell, Inc. Method and mechanism for the creation, maintenance, and comparison of semantic abstracts
US6249780B1 (en) 1998-08-06 2001-06-19 Yamaha Hatsudoki Kabushiki Kaisha Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object
US6230111B1 (en) 1998-08-06 2001-05-08 Yamaha Hatsudoki Kabushiki Kaisha Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object
US6405199B1 (en) 1998-10-30 2002-06-11 Novell, Inc. Method and apparatus for semantic token generation based on marked phrases in a content stream
EP1006452A2 (en) 1998-12-02 2000-06-07 Pitney Bowes Inc. A method and system for creating a rules-based document
US6842877B2 (en) 1998-12-18 2005-01-11 Tangis Corporation Contextual responses based on automated learning techniques
JP2000259597A (en) 1999-03-08 2000-09-22 Inst Of Physical & Chemical Res Brain type computer system and storage system applicable to the same
US6611841B1 (en) 1999-04-02 2003-08-26 Abstract Productions, Inc. Knowledge acquisition and retrieval apparatus and method
US6453315B1 (en) * 1999-09-22 2002-09-17 Applied Semantics, Inc. Meaning-based information organization and retrieval
US7392185B2 (en) 1999-11-12 2008-06-24 Phoenix Solutions, Inc. Speech based learning/training system using semantic decoding
US6317700B1 (en) 1999-12-22 2001-11-13 Curtis A. Bagne Computational method and system to perform empirical induction
US20040205035A1 (en) 2000-05-01 2004-10-14 Norbert Rimoux Method and system for adaptive learning and pattern recognition
US6999963B1 (en) * 2000-05-03 2006-02-14 Microsoft Corporation Methods, apparatus, and data structures for annotating a database design schema and/or indexing annotations
US7286977B1 (en) 2000-09-05 2007-10-23 Novell, Inc. Intentional-stance characterization of a general content stream or repository
US7653530B2 (en) 2000-07-13 2010-01-26 Novell, Inc. Method and mechanism for the creation, maintenance, and comparison of semantic abstracts
US7389225B1 (en) 2000-10-18 2008-06-17 Novell, Inc. Method and mechanism for superpositioning state vectors in a semantic abstract
US6950104B1 (en) 2000-08-30 2005-09-27 Microsoft Corporation Methods and systems for animating facial features, and methods and systems for expression transformation
TWI221574B (en) 2000-09-13 2004-10-01 Agi Inc Sentiment sensing method, perception generation method and device thereof and software
US6795808B1 (en) 2000-10-30 2004-09-21 Koninklijke Philips Electronics N.V. User interface/entertainment device that simulates personal interaction and charges external database with relevant data
US6731307B1 (en) 2000-10-30 2004-05-04 Koninklije Philips Electronics N.V. User interface/entertainment device that simulates personal interaction and responds to user's mental state and/or personality
US20040006566A1 (en) 2000-11-07 2004-01-08 Matt Taylor System and method for augmenting knowledge commerce
US20020087346A1 (en) 2000-11-28 2002-07-04 Harkey Scott T. Utilization of competencies as drivers in a learning network
TWI236610B (en) 2000-12-06 2005-07-21 Sony Corp Robotic creature device
EP1256937B1 (en) 2001-05-11 2006-11-02 Sony France S.A. Emotion recognition method and device
EP1262844A1 (en) 2001-06-01 2002-12-04 Sony International (Europe) GmbH Method for controlling a man-machine-interface unit
GB0113587D0 (en) 2001-06-04 2001-07-25 Hewlett Packard Co Speech synthesis apparatus
US20030093280A1 (en) 2001-07-13 2003-05-15 Pierre-Yves Oudeyer Method and apparatus for synthesising an emotion conveyed on a sound
US7577631B2 (en) 2001-09-10 2009-08-18 Feldhake Michael J Cognitive image filtering
US7747549B2 (en) 2001-09-25 2010-06-29 Rikan Long-term memory neural network modeling memory-chaining functions of the brain wherein a pointer holds information about mutually related neurons and neurons are classified hierarchically by degree of activation
US20030101151A1 (en) 2001-11-26 2003-05-29 Holland Wilson Lee Universal artificial intelligence software program
US20030103053A1 (en) 2001-12-03 2003-06-05 Stephany Thomas M. Method for creating photo-realistic animation that expresses a plurality of expressions
CN100509308C (en) 2002-03-15 2009-07-08 索尼公司 Robot behavior control system, behavior control method, and robot device
US7249117B2 (en) 2002-05-22 2007-07-24 Estes Timothy W Knowledge discovery agent system and method
US20030234871A1 (en) 2002-06-25 2003-12-25 Squilla John R. Apparatus and method of modifying a portrait image
US20040054636A1 (en) 2002-07-16 2004-03-18 Cognita, Inc. Self-organizing neural mapper
EP1579415A4 (en) 2002-09-09 2006-04-19 Ingeeni Studios Inc Artificial intelligence platform
JP4204839B2 (en) 2002-10-04 2009-01-07 株式会社エイ・ジー・アイ Idea model device, spontaneous emotion model device, idea simulation method, spontaneous emotion simulation method, and program
WO2004049306A1 (en) * 2002-11-22 2004-06-10 Roy Rosser Autonomous response engine
US20040138936A1 (en) 2003-01-09 2004-07-15 Johnson Christopher D. Performing what-if forecasts using a business information and decisioning control system
US20040186743A1 (en) 2003-01-27 2004-09-23 Angel Cordero System, method and software for individuals to experience an interview simulation and to develop career and interview skills
JP2004237392A (en) 2003-02-05 2004-08-26 Sony Corp Robotic device and expression method of robotic device
US7707135B2 (en) 2003-03-04 2010-04-27 Kurzweil Technologies, Inc. Enhanced artificial intelligence language
US7113848B2 (en) 2003-06-09 2006-09-26 Hanson David F Human emulation robot system
MY138544A (en) 2003-06-26 2009-06-30 Neuramatix Sdn Bhd Neural networks with learning and expression capability
US7480662B2 (en) 2003-07-03 2009-01-20 Oracle International Corporation Fact table storage in a decision support system environment
US7426501B2 (en) 2003-07-18 2008-09-16 Knowntech, Llc Nanotechnology neural network methods and systems
JP2005044330A (en) 2003-07-24 2005-02-17 Univ Of California San Diego Weak hypothesis generation device and method, learning device and method, detection device and method, expression learning device and method, expression recognition device and method, and robot device
US7725419B2 (en) 2003-09-05 2010-05-25 Samsung Electronics Co., Ltd Proactive user interface including emotional agent
US20080243741A1 (en) 2004-01-06 2008-10-02 Neuric Technologies, Llc Method and apparatus for defining an artificial brain via a plurality of concept nodes connected together through predetermined relationships
JP2005193331A (en) 2004-01-06 2005-07-21 Sony Corp Robot device and its emotional expression method
US20070250464A1 (en) 2004-01-06 2007-10-25 Neuric Technologies, Llc Historical figures in today's society
JP4244812B2 (en) 2004-01-16 2009-03-25 ソニー株式会社 Action control system and action control method for robot apparatus
US7584099B2 (en) 2005-04-06 2009-09-01 Motorola, Inc. Method and system for interpreting verbal inputs in multimodal dialog system
US7734562B1 (en) 2005-12-30 2010-06-08 Brainpool, Inc. Voice to text conversion with keyword parse and match to semantic and transactional concepts stored in a brain pool state machine using word distance to generate character model interaction in a plurality of dramatic modes
EP1969591A1 (en) * 2006-01-05 2008-09-17 Thomas A. Visel A method for inclusion of psychological temperament in an electornic emulation of the human brain
US20100088262A1 (en) 2008-09-29 2010-04-08 Neuric Technologies, Llc Emulated brain

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6296368B1 (en) * 1987-10-23 2001-10-02 Mag Instrument, Inc. Rechargeable miniature flashlight
US5371807A (en) * 1992-03-20 1994-12-06 Digital Equipment Corporation Method and apparatus for text classification
US5687286A (en) * 1992-11-02 1997-11-11 Bar-Yam; Yaneer Neural networks with subdivision
US5406956A (en) * 1993-02-11 1995-04-18 Francis Luca Conte Method and apparatus for truth detection
US5918222A (en) * 1995-03-17 1999-06-29 Kabushiki Kaisha Toshiba Information disclosing apparatus and multi-modal information input/output system
US6081774A (en) * 1997-08-22 2000-06-27 Novell, Inc. Natural language information retrieval system and method
US20040181427A1 (en) * 1999-02-05 2004-09-16 Stobbs Gregory A. Computer-implemented patent portfolio analysis method and apparatus
US6584464B1 (en) * 1999-03-19 2003-06-24 Ask Jeeves, Inc. Grammar template query system
US6415257B1 (en) * 1999-08-26 2002-07-02 Matsushita Electric Industrial Co., Ltd. System for identifying and adapting a TV-user profile by means of speech technology
US6513006B2 (en) * 1999-08-26 2003-01-28 Matsushita Electronic Industrial Co., Ltd. Automatic control of household activity using speech recognition and natural language
US6330537B1 (en) * 1999-08-26 2001-12-11 Matsushita Electric Industrial Co., Ltd. Automatic filtering of TV contents using speech recognition and natural language
US6353810B1 (en) * 1999-08-31 2002-03-05 Accenture Llp System, method and article of manufacture for an emotion detection system improving emotion recognition
US6601026B2 (en) * 1999-09-17 2003-07-29 Discern Communications, Inc. Information retrieval by natural language querying
US20040243568A1 (en) * 2000-08-24 2004-12-02 Hai-Feng Wang Search engine with natural language-based robust parsing of user query and relevance feedback learning
US20030130837A1 (en) * 2001-07-31 2003-07-10 Leonid Batchilo Computer based summarization of natural language documents
US6826568B2 (en) * 2001-12-20 2004-11-30 Microsoft Corporation Methods and system for model matching
US20040193420A1 (en) * 2002-07-15 2004-09-30 Kennewick Robert A. Mobile systems and methods for responding to natural language speech utterance
US20050010416A1 (en) * 2003-07-09 2005-01-13 Gensym Corporation System and method for self management of health using natural language interface

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9213936B2 (en) 2004-01-06 2015-12-15 Neuric, Llc Electronic brain model with neuron tables
US20100042568A1 (en) * 2004-01-06 2010-02-18 Neuric Technologies, Llc Electronic brain model with neuron reinforcement
US9064211B2 (en) 2004-01-06 2015-06-23 Neuric Technologies, Llc Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain
US20100185437A1 (en) * 2005-01-06 2010-07-22 Neuric Technologies, Llc Process of dialogue and discussion
US8473449B2 (en) 2005-01-06 2013-06-25 Neuric Technologies, Llc Process of dialogue and discussion
US20060248461A1 (en) * 2005-04-29 2006-11-02 Omron Corporation Socially intelligent agent software
US20080079965A1 (en) * 2006-09-27 2008-04-03 Andrew Jackson Method, apparatus and technique for enabling individuals to create and use color
US8405868B2 (en) * 2006-09-27 2013-03-26 Andrew Jackson Method, apparatus and technique for enabling individuals to create and use color
US20130172693A1 (en) * 2010-09-07 2013-07-04 Penina Ohana Lubelchick Diagnosing system for consciousness level measurement and method thereof
US10376197B2 (en) 2010-09-07 2019-08-13 Penina Ohana Lubelchick Diagnosing system for consciousness level measurement and method thereof
US10820851B2 (en) 2010-09-07 2020-11-03 Penina Ohana Lubelchick Diagnosing system for consciousness level measurement and method thereof
US11687722B2 (en) * 2014-04-18 2023-06-27 Thomas A. Visel Automated comprehension of natural language via constraint-based processing
US20200218860A1 (en) * 2014-04-18 2020-07-09 Thomas A. Visel Automated comprehension of natural language via constraint-based processing
WO2016054441A1 (en) * 2014-10-01 2016-04-07 Thalchemy Corporation Efficient and scalable systems for calculating neural network connectivity in an event-driven way
US10339439B2 (en) * 2014-10-01 2019-07-02 Thalchemy Corporation Efficient and scalable systems for calculating neural network connectivity in an event-driven way
US20160098629A1 (en) * 2014-10-01 2016-04-07 Thalchemy Corporation Efficient and scalable systems for calculating neural network connectivity in an event-driven way
US9667786B1 (en) * 2014-10-07 2017-05-30 Ipsoft, Inc. Distributed coordinated system and process which transforms data into useful information to help a user with resolving issues
CN109475301A (en) * 2016-05-11 2019-03-15 佩尼纳·奥哈纳·卢贝尔奇克 Diagnostic system and its method for level of consciousness measurement
WO2017195194A1 (en) * 2016-05-11 2017-11-16 Penina Ohana Lubelchick Diagnosing system for consciousness level measurement and method thereof
US10372826B2 (en) * 2017-09-15 2019-08-06 International Business Machines Corporation Training data update
US10387572B2 (en) * 2017-09-15 2019-08-20 International Business Machines Corporation Training data update
US10614269B2 (en) * 2017-09-15 2020-04-07 International Business Machines Corporation Training data update
US10621284B2 (en) * 2017-09-15 2020-04-14 International Business Machines Corporation Training data update

Also Published As

Publication number Publication date
US9064211B2 (en) 2015-06-23
US7925492B2 (en) 2011-04-12
US20120185234A1 (en) 2012-07-19

Similar Documents

Publication Publication Date Title
US9064211B2 (en) Method for determining relationships through use of an ordered list between processing nodes in an emulated human brain
US9213936B2 (en) Electronic brain model with neuron tables
US20070250464A1 (en) Historical figures in today&#39;s society
US20070156625A1 (en) Method for movie animation
US8001067B2 (en) Method for substituting an electronic emulation of the human brain into an application to replace a human
Blouw et al. Concepts as semantic pointers: A framework and computational model
Rogers et al. Parallel distributed processing at 25: Further explorations in the microstructure of cognition
Munakata et al. Connectionist models of development
Bickerton Language and species
US20140046891A1 (en) Sapient or Sentient Artificial Intelligence
Kapatsinski Changing minds changing tools: From learning theory to language acquisition to language change
EP1969591A1 (en) A method for inclusion of psychological temperament in an electornic emulation of the human brain
Duch Intuition, insight, imagination and creativity
Ellis Frequency-based grammar and the acquisition of tense and aspect in L2 learning
Bechtel The case for connectionism
Ma et al. Deconstructing legal text_Object oriented design in legal adjudication
WO2007092795A2 (en) Method for movie animation
Marrville Gender and dominance in action: World view and emotional affect in language processing and use
Wang et al. How internal neurons represent the short context: an emergent perspective
Chater et al. Computational models of psycholinguistics
Rispoli Rethinking innateness
Edelman et al. Learning as formation of low-dimensional representation spaces
Sangster Follow the Signs: Archetypes of consciousness embodied in the signs of language
Plunkett Parallel distributed processing
MacDonald Lexical representations and sentence processing

Legal Events

Date Code Title Description
AS Assignment

Owner name: NEURIC TECHNOLOGIES, L.L.C., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VISEL, THOMAS A.;REEL/FRAME:019737/0264

Effective date: 20070717

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20190412