US20020083424A1 - Systems for analyzing and computing data items - Google Patents
- Publication number
- US20020083424A1 (application Ser. No. 09/989,098)
- Authority
- US
- United States
- Prior art keywords
- training
- node
- data
- nodes
- terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/1858—Parallel file systems, i.e. file systems supporting multiple processors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99931—Database or file accessing
- Y10S707/99933—Query processing, i.e. searching
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99931—Database or file accessing
- Y10S707/99933—Query processing, i.e. searching
- Y10S707/99936—Pattern matching access
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99931—Database or file accessing
- Y10S707/99938—Concurrency, e.g. lock management in shared database
Description
- This application is a continuation of prior application Ser. No. 09/281,984, filed on Mar. 29, 1999, now pending, which is a continuation of prior application Ser. No. 08/624,844, filed Mar. 25, 1996, issued Jun. 1, 1999.
- The present invention relates to computer systems for analyzing, and computing with, sets of data, such as, for example, extremely large data sets.
- As computing power has grown, it has become increasingly practical to process data, and, in particular, large amounts of data, in new and useful ways. For example, the term “data base mining” has been used to describe the practice of searching vast amounts of data for commercially, medically, or otherwise important patterns, patterns which would probably have been impossible to find by human pattern matching, and which probably would have taken too long to find with prior generations of computer equipment.
- One common use of data base mining is for corporations to search through data bases containing records of millions of customers or potential customers, looking for data patterns indicating which of those customers are sufficiently likely to buy a given product to justify the cost of selecting them as targets of a direct marketing campaign. In such searches, not only are millions of records searched, but hundreds, or even thousands, of fields within each record. Such data base mining has proven much more successful at selecting which customers are most likely to be interested in a given new product than prior methods.
- Similarly, data base mining can be used for scanning vast numbers of medical records to look for subtle patterns associated with disease; for scanning large numbers of financial transactions to look for behavior likely to be fraudulent; or for studying scientific records to look for new causal relationships.
- Because they often involve a tremendous number of records, and often seek patterns across a large number of fields per record, data base mining operations tend to require huge amounts of computation. This, in combination with the fact that most data base mining operations can be easily partitioned to run on separate processors, has made data base mining one of the first major commercial uses of massively parallel computers. But even when run on most commercially available parallel systems, many data base mining functions are relatively slow because of their tremendous complexity. There is therefore a need to improve the speed at which such tasks can be performed.
- Neural nets are a well known device for automatically selecting which patterns of values in certain source fields of records are likely to be associated with desired values in one or more target fields.
- A neural network normally includes an input layer comprised of a plurality of input nodes, an output layer of one or more output nodes, and, in hidden-layer networks, one or more so-called hidden layers, each comprised of one or more nodes. Hidden layers are hidden in the sense that they do not connect directly to any inputs or outputs.
- The knowledge in a neural net is contained in its weights. Each node in the input layer or hidden layer contains a weight associated with its connection with each node in the next layer.
- Thus, in a typical hidden-layer network, each node in the input layer has a separate weight for its connection to each node in the hidden layer, and each node in the hidden layer has a separate weight for its connection to each node in the output layer.
- the value supplied to each given node in a given layer is supplied to each individual node in the successive layer, multiplied by the weight representing the connection between the given node and the individual node in the successive layer.
- Each node receiving such values generates an output, which is a function of the sum of the values supplied to it.
- the output is a non-linear function of the sum of values supplied to the node, such as a sigmoid function.
- the sigmoid function has the effect of making the output operate like an on-off switch whose output varies rapidly from a substantially “off” value to a substantially “on” value as the sum of the values supplied to the node crosses a small threshold region.
- A common way to train the weights of a neural network is to take each record in a training set and apply the value of each of its source fields to a corresponding input of the net.
- The network's weights are then modified to decrease the difference between the resulting values generated at the network's one or more outputs and the actual values for the outputs' corresponding target fields in the record.
- There are a variety of well known methods for making such weight modifications, including back propagation, conjugate gradient, and quick propagation.
- The training process is normally repeated multiple times for all the training records until the sum of the differences between the generated and actual outputs approaches a relative minimum.
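- To make the training loop just described concrete, the following minimal sketch trains a single sigmoid output against one target field using the classic delta rule. It is an illustration, not the patent's implementation; the function names, learning rate, and stopping tolerance are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_sigmoid_unit(inputs, targets, lr=0.1, max_epochs=200, tol=1e-4):
    """Repeatedly adjust weights to shrink the generated/actual difference."""
    w = np.random.uniform(-0.1, 0.1, inputs.shape[1])
    prev_sse = np.inf
    for _ in range(max_epochs):
        sse = 0.0
        for x, t in zip(inputs, targets):
            y = sigmoid(np.dot(w, x))          # output: sigmoid of the weighted sum
            err = t - y                        # generated value vs. target field
            w += lr * err * y * (1.0 - y) * x  # delta-rule weight update
            sse += err * err
        if prev_sse - sse < tol:               # stop near a relative minimum
            break
        prev_sse = sse
    return w
```

- Back propagation, conjugate gradient, and quick propagation generalize this kind of update to networks with hidden layers.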
- One of the problems with neural nets is that the amount of time to appropriately train them to recognize all of the possible source field patterns associated with desired target field values goes up very rapidly as the number of source or target fields does, and as the number of different types of source patterns which might be associated with a desired target does. Even with large parallel computer systems the amount of time required to properly train such networks to learn such complex sets of patterns is often prohibitive.
- In an attempt to improve the speed at which neural networks can be trained, a new type of network, the so-called neural tree network, has been proposed. These are decision trees, a well known type of classifying tool, in which a neural network is placed at each of the tree's non-terminal nodes. In such trees, each non-terminal node is a two layer network, which trains much more rapidly than a hidden-layer network.
- The data applied to each non-terminal node is used to train up the node's neural net. This is done in a training process which applies the source fields used in the overall classification process to the input nodes of the net and the one or more target fields used in that classification process to the output of the two layer net.
- Once the network has been trained over the training set, the data objects are split between the node's child nodes based on whether the one or more sigmoidal outputs of the trained net are “on” or “off” for each such data object.
- The data objects reaching the tree's terminal, or leaf, nodes are considered classified by the identity of the particular leaf node they reached.
- Such neural tree networks have the advantage of training much more rapidly than traditional neural networks, particularly when dealing with large complex classification tasks. However, they are not as discriminating as might be desired.
- In general, a major issue in parallel computing is the division of the computational task so that a reasonable percentage of the computing power of multiple processors can be taken advantage of, and so that the analytical power of the process is as high as possible. This issue is particularly important when it comes to many data base mining functions, such as the training of neural networks mentioned above or other modeling tasks.
- It is an object of the present invention to provide apparatuses and methods for more efficiently computing large amounts of data.
- It is another object of the present invention to provide apparatuses and methods for efficiently finding patterns in data sets, particularly large data sets.
- It is still another object of the present invention to provide apparatuses and methods for efficiently using and training neural networks to find patterns in data sets.
- It is yet another object of the present invention to provide apparatuses and methods for more efficient parallel computing.
- According to one aspect of the present invention, a computer system with P processors receives data objects having N parameters. It divides an N-dimensional data space defined by the N parameters into M sub-spaces, where M is greater than or equal to P. This is done in such a manner that the boundaries between the resulting sub-spaces need not be orthogonal to the N dimensions.
- the system associates a different set of one or more sub-spaces with each of the P processors. It distributes data objects located in each sub-space to the sub-space's associated processor and causes each processor to perform a computational process on each of the data objects distributed to it.
- According to another aspect of the invention, a computer system with P processors receives a set of data objects to be processed.
- A decision tree partitions the data set into at least M data sub-sets, where M is equal to or greater than P.
- a different set of one or more of the sub-sets is associated with each processor, and the data objects in each sub-set are sent to the associated processor for processing.
- In some embodiments, the process of using a decision tree to partition the data set is performed on fewer than P processors.
- In many embodiments, the decision criteria of the non-terminal nodes of the decision tree are trained on the data set, in a process where each non-terminal node both trains on and then divides between its children the data supplied to it.
- In some embodiments, the non-terminal nodes are neural nets having hidden layers.
- In some embodiments, the decision criteria of the non-terminal nets can be automatically set to achieve a desired ratio between the number of data objects sent to each of such node's child nodes.
- In some such embodiments, the system automatically configures the decision tree to have a number of leaf nodes which is an integer multiple of the number P of processors.
- According to another aspect of the invention, a computer system divides an N-dimensional data space, having a separate dimension for each of N parameters associated with the data set, into M sub-spaces. It associates each of these M sub-spaces with a corresponding one of M hidden-layer neural networks, and uses the data objects in each of the M sub-spaces to train that sub-space's associated hidden-layer neural network. The resulting divisions need not be orthogonal to the N dimensions of the space.
- According to another aspect of the invention, a computer system creates a decision tree having a neural network for each of its nodes, including a hidden-layer network for each of its terminal, or leaf, nodes.
- Each of the tree's non-terminal nodes uses the portion of the training data which is supplied to it to train its associated neural network, and then uses that neural network, once trained, to determine which of the training data objects supplied to it should be supplied to each of its child nodes.
- In one embodiment, the net in each non-terminal node is trained to divide an N-dimensional space defined by parameters from the training data set into sub-spaces, and the data objects associated with each sub-space are routed to a different one of that non-terminal node's child nodes.
- In such an embodiment, each non-terminal node can be a two layer neural network which defines a single vector of weights in the N-dimensional space, and the data space is split by a plane perpendicular to that vector.
- the portion of the training set supplied by the decision tree to each of its terminal, or leaf, nodes is used to train that node's corresponding neural network.
- In preferred embodiments, different leaf node networks are trained on different processors.
- In many embodiments, a copy of the entire decision tree, including the neural networks in both its non-terminal and leaf nodes, is stored on each of a plurality of processors.
- Then a set of new data objects is split into separate data partitions, one for each such processor.
- Finally, data objects from the partition associated with each processor are passed down through the copy of the complete decision tree stored on that processor. This causes each such data object to be routed to a given leaf node of the tree, at which point the hidden-layer neural network associated with the given leaf node will analyze the data object, such as by classifying it, or recording an estimated value for each of its target fields.
- According to another aspect of the invention, a neural net tree has hidden-layer neural networks in its non-terminal nodes.
- According to another aspect of the invention, a computer system includes a neural network, such as one in the nodes of one of the above mentioned decision trees, which automatically causes a selected percent of data objects applied to the neural network to be selected for a given purpose.
- These and other aspects of the present invention will become more evident upon reading the following description of the preferred embodiment in conjunction with the accompanying drawings, in which:
- FIG. 1 is a schematic representation of one type of parallel computing system which can be used with the present invention
- FIG. 2 is a schematic representation of the BuildModel process for training a neural tree network which embodies the present invention
- FIG. 3 illustrates BuildModel_Master, a simplified pseudo-code representation of the process run on one processor to train the non-terminal nodes of the neural tree network as part of the training process shown in FIG. 2;
- FIG. 4 is a schematic representation of a data space defined by a portion of a training data set supplied to a non-terminal node of the neural tree network shown in FIG. 2, and of the selection of a parameter whose values have the greatest spread in that portion of the data set;
- FIG. 5 is a schematic representation of the process of training a non-terminal node in the tree of FIG. 2;
- FIG. 6 is a schematic representation of the vector of weights defined by the training process of FIG. 5;
- FIG. 7 is a schematic representation of the vector of FIG. 6 shown in the spatial coordinates of FIG. 4;
- FIG. 8 is a schematic representation of the process of using the neural net of a non-terminal node of the tree shown in FIG. 2 to split the training data objects supplied to it between that node's two child nodes;
- FIG. 9 is a schematic representation of the decision process shown in FIG. 8 represented in the spatial coordinates of FIGS. 4 and 7;
- FIG. 10 is a schematic representation of the data space and data points of FIG. 4, as split by the process of FIGS. 8 and 9 into two sub-spaces;
- FIG. 11 is a schematic representation of the data space of FIG. 10, indicating that the processes of FIGS. 5-9 are separately applied to each of the sub-spaces shown in FIG. 10;
- FIG. 12 is a schematic representation of the space of data points of FIG. 10, with each of the two sub-spaces shown in FIG. 10 having been sub-divided into two sub-sub-spaces;
- FIG. 13 illustrates BuildModel_Slave, a simplified pseudo-code representation of the process run on each of a plurality of processors to train the hidden-layer neural networks associated with the leaf nodes of the neural tree network shown in FIG. 2;
- FIG. 14 is a schematic representation of the ApplyModel process in which a large Apply data set is partitioned, and each separate partition is run through a copy of the neural tree network trained in FIG. 2 on a separate processor;
- FIG. 15 is a schematic representation of the copy of the neural tree network contained in each processor in FIG. 14, and of the data records passing through that tree;
- FIG. 16 illustrates ApplyModel_Master, a simplified pseudo-code representation of the process run on a single processor to control the ApplyModel process shown schematically in FIG. 14;
- FIG. 17 illustrates ApplyModel_Slave, a simplified pseudo-code representation of the process run on each of a plurality of separate processor nodes in the ApplyModel process shown in FIGS. 14 and 15;
- FIG. 18 is a schematic representation of the ApplyModel process when it is supplied with un-partitioned data;
- FIG. 19 is a schematic representation of the ApplyModel process when it is used in conjunction with another computational process which supplies it with data that is already partitioned;
- FIG. 20 illustrates an alternate embodiment of the ApplyModel process in which the neural tree network includes hidden-layer networks in its non-terminal nodes.
- FIG. 1 shows one type of parallel computer system 50 which can be used to create an embodiment of the present invention.
- in this system, eight processors 52 are connected together through a high-speed computer network 54.
- Also connected to this computer network is a workstation 56 which enables a user to control the system and to receive selective output from it.
- Each processor 52 includes a central processing unit, or CPU, 58, which executes instructions stored in, and reads and writes data from and to, the random access memory, or RAM, 60.
- a network interface 62 performs the function of reading and writing data over the network between processors.
- a disk interface 64 enables each processor to read and write data to one or more hard disks 66 connected to each processor.
- the computer programs and data structures described in the following application are stored in one or more of the random access memories 60 or hard disks 66 , and are executed or manipulated by one or more of the processors' CPUs.
- For example, in FIG. 1 the BuildModel_Master program code 89 and the neural tree network data structure 70 are shown stored in the RAM 60 of the master processor 52A, and the BuildModel_Slave process 148 and the leaf node neural network 75 are shown stored in the RAM of each slave processor 52.
- When such programs and data structures are stored in RAM or hard disk memory and are processed by CPUs they convert the computing system 50 into a system for performing the present invention's functions.
- one of the processor nodes 52 A is labeled a “master”.
- The other processor nodes are labeled “slave”.
- In the parallel processing scheme used in a preferred embodiment of the invention, certain computational processes are best performed on one machine. Also, there is a benefit in having one machine tell the others what to do. This one machine is called the master, since it controls the operation of the other, slave, processors.
- In the embodiment shown in the figures, the master runs on a different machine than any of the slaves. In other embodiments, a single processor can act as both a master and a slave.
- FIG. 2 illustrates BuildModel, a process of training a neural tree network 70 used in one embodiment of the present invention.
- the tree network 70 contains a plurality of non-terminal nodes 72 and terminal, or leaf, nodes, each of which is represented by a bin for data records 74 and a hidden-layer neural network 75 .
- Each non-terminal node contains a two layer neural network 76 .
- Each such two layer network itself contains a layer of input nodes 78 and one output node 80.
- the non-terminal nodes of the tree are trained, and records 82 of a training data set 84 are divided into leaf node bins 74 on the master processor.
- the training records routed to each terminal, or leaf, node by the non-terminal nodes of the tree are then used to train the hidden-layer neural network associated with that leaf node. This training process is performed on one of the slave processors 52 .
- FIG. 3 illustrates BuildModel_Master, a highly simplified pseudo-code representation of the process which is run on the master to build and train the tree's non-terminal nodes and to select which records should be associated with each of the leaf node bins 74 .
- Step 90 creates the largest balanced binary tree topology which has a number of temporary leaf nodes fitting within NoOfEndNets, the desired number of leaf nodes specified by a user. This balanced tree will have a number of leaf nodes corresponding to the largest power of two which fits within NoOfEndNets.
- In FIG. 2, NoOfEndNets has been set to seven, so there will be a separate leaf node for each of the seven slave processors 52 shown in that figure.
- In that case, step 90 will create a tree having the top three non-terminal nodes 72 shown in FIG. 2, starting with the root node 72A. At this point the incomplete tree topology will have room for four temporary leaf nodes, since four is the largest power of two fitting within seven.
- Next, step 92 adds non-terminal nodes to the bottom level of the nascent tree until there are NoOfEndNets leaf nodes.
- In FIG. 2, the bottommost three non-terminal nodes 72 are added in this step. This causes the total number of leaf nodes 74 to equal seven, the desired number indicated by the user.
- step 94 associates a RecordRatio value equal to one divided by NoOfEndNets with each leaf node 74 .
- this causes a value of 1/7 to be associated with each leaf node 74.
- step 96 goes up the tree one level at a time, associating a RecordRatio value with each non-terminal node equal to the sum of the RecordRatios of that node's two child nodes. Once this is done, each non-terminal node will know what percent of the records supplied to it are to be supplied to each of its two child nodes, based on the ratio of the RecordRatio values of those two child nodes.
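- Steps 90 through 96 might be reconstructed as the sketch below. This is a hypothetical reading of the pseudo-code description (the Node class and build_tree name are illustrative): grow the largest balanced tree whose leaf count is a power of two within NoOfEndNets, split bottom-level leaves until the count is reached, then assign RecordRatio values bottom-up.

```python
class Node:
    """Illustrative tree node; a leaf has an empty children list."""
    def __init__(self):
        self.children = []
        self.record_ratio = 0.0

def build_tree(no_of_end_nets):
    root = Node()
    leaves = [root]
    # Step 90: largest balanced tree with a power-of-two leaf count <= NoOfEndNets.
    while 2 * len(leaves) <= no_of_end_nets:
        next_level = []
        for leaf in leaves:
            leaf.children = [Node(), Node()]
            next_level.extend(leaf.children)
        leaves = next_level
    # Step 92: split bottom-level leaves until there are NoOfEndNets leaves
    # (each split replaces one leaf with two, adding one net leaf).
    i = 0
    while len(leaves) < no_of_end_nets:
        leaves[i].children = [Node(), Node()]
        leaves = leaves[:i] + leaves[i].children + leaves[i + 1:]
        i += 2
    # Steps 94-96: leaves get RecordRatio 1/NoOfEndNets; parents sum their children.
    def assign(node):
        if not node.children:
            node.record_ratio = 1.0 / no_of_end_nets
        else:
            node.record_ratio = sum(assign(c) for c in node.children)
        return node.record_ratio
    assign(root)
    return root, leaves
```

- With no_of_end_nets set to seven, as in FIG. 2, this yields a four-leaf balanced tree that is then split three times, matching the seven-leaf topology described above.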
- a step 98 supplies all the records 82 of the training set 84 to the root non-terminal node 72A of the tree. Once this is done, a step 100 performs a loop for each horizontal level of the tree. This is the basic loop in the training process, and once it has been completed for all such levels, all of the tree's non-terminal nodes will have been trained and all of the training records will have been routed to one of the leaf node bins 74.
- For each horizontal level of the tree containing non-terminal nodes, loop 100 performs a sub-loop for each non-terminal node in that level. Each such sub-loop consists of steps 104-120.
- Step 104 selects, from the N parameters of the training records used in the non-terminal node networks, the ParameterOfGreatestSpread, that is, the one of the N parameters over which the training records supplied to the current node have the greatest spread.
- The N parameters used for such purposes will normally comprise all of the I source fields to be used in training the leaf node hidden-layer neural networks 75, and perhaps also the J target fields to be used in that training.
- spread is best measured by a statistical measurement of spread, such as standard deviation.
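- Assuming standard deviation as the measure, step 104 reduces to a couple of lines. A sketch (in practice the fields would need comparable scales, or prior normalization, for raw standard deviations to be comparable):

```python
import numpy as np

def parameter_of_greatest_spread(records):
    """records: (num_records, N) array of the N parameters.
    Returns the index of the parameter with the largest standard deviation."""
    return int(np.argmax(np.std(records, axis=0)))
```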
- FIG. 4 illustrates three dimensions 128A-128C of the N-dimensional space 130 defined by the N parameters 83 used in training the non-terminal nodes.
- the set of N parameters used by the non-terminal nodes can include integer and binary values, as well as real number values.
- FIG. 4 shows the records 82 of the training set as data points in that N-dimensional space.
- in FIG. 4, the parameter 83A, corresponding to the vertical axis 128C, has the greatest spread of values.
- Step 106 creates a two layer neural network for the current node, with a separate input node for each of the remaining N parameters to be used in training the non-terminal nodes and one output node.
- a step 108 repeatedly performs a training loop 109 until the node's network appears to have been properly trained.
- FIG. 5 provides a schematic representation of the training process.
- Each iteration of the training loop 109 performs a step 110 for each training record 82 supplied to the current node.
- This step supplies the value in each of the current training record's N parameters 83 to a corresponding input 76 of the non-terminal node's neural net. It also supplies the ParameterOfGreatestSpread 83A to the network's output 80. It compares the generated value produced at the output node in response to the values supplied to the inputs 76 with the value supplied to the output by the training record. It then modifies the weight 132 associated with each input 76 so as to reduce that difference, by using one of the well known schemes for training the weights of neural networks.
- FIG. 6 illustrates the set of weights W associated with each of the N inputs 76 as a vector 134, having the form W1, W2, W3, . . . , WN.
- the loop 108 stops training when either a certain number of training loops have been exceeded or when the reduction, between successive training loops, in the sum, taken over each training cycle 109 , of the differences between generated and training record values for the output 80 drops below a given level.
- FIG. 7 illustrates that once the current node's neural network has been trained by multiple iterations of the loop 108, the vector 134 defined by the net's weights will have a direction generally corresponding to the direction of greatest spread of the distribution of records 82 in the N-dimensional space 130. It should be noted that this vector will not, in general, be parallel to any parameter axis of the N-dimensional space, except in the unusual case in which the axis of maximum spread of the node's training data is parallel to such an axis.
- a loop 112 is performed for each record in the node's training data.
- FIG. 8 schematically represents the loop 112 and the functions of its sub-steps.
- step 114 applies the record's N parameters 83 to the inputs 76 of the node's network
- step 116 uses the resulting value 138 produced at the net's output as a score. It indexes the current record in a ScoreList 140 , ordered by such scores.
- each score 138 corresponds to the perpendicular projection of the corresponding data point 82 onto the vector 134, as shown in FIG. 9.
- step 118 selects a SplitPoint 139 in the ScoreList 140 having the same ratio of records scored above and below it as the ratio between the RecordRatios of the current non-terminal node's two child nodes.
- Moving this SplitPoint up and down the ScoreList corresponds to translating a plane of split 142 , perpendicular to the vector 134 , in a direction parallel to that vector.
- the corresponding plane of split 142 will divide the distribution of data records supplied to the node. It will do so in a manner that associates a desired ratio of training records with each of the non-terminal node's two child nodes.
- step 120 then sends the training records on each side of the SplitPoint to a respective one of the current node's two child nodes.
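- Steps 112 through 120 amount to scoring every record with the trained net, ordering the scores, and cutting the ordered list at the desired ratio. A minimal sketch (the clamp on the split index is an added safety detail, not from the patent):

```python
import numpy as np

def split_on_score(records, weights, left_ratio):
    """records: (num_records, N); weights: the trained vector 134;
    left_ratio: the low-score child's share of the node's RecordRatio."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    scores = sigmoid(records @ weights)       # step 116: net output used as a score
    order = np.argsort(scores)                # the ScoreList, ordered by score
    k = int(round(left_ratio * len(order)))   # step 118: SplitPoint position
    k = min(max(k, 1), len(order) - 1)        # keep both children non-empty
    split_point = scores[order[k]]            # the SplitPoint 139
    return split_point, records[order[:k]], records[order[k:]]  # step 120
```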
- each iteration of the loop 100 will cause the non-terminal nodes to split the data space 130 of the training records supplied to them into sub-spaces 130A and 130B, as shown schematically in FIG. 10.
- the process of finding the vector of maximum spread shown in FIGS. 5-7 and projecting all of the data in a given portion of the data space onto that vector will be repeated for each such sub-space 130A and 130B.
- this will result in the sub-space 130A being divided into sub-sub-spaces 130AA and 130AB, and the sub-space 130B being divided into the sub-sub-spaces 130BA and 130BB.
- step 122 creates a compressed representation of the tree network.
- each non-terminal node's neural net is represented by its weight vector 134 and its SplitPoint 139 .
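- In other words, a trained non-terminal node compresses to one weight vector plus one scalar. A sketch of such a structure (the layout and names are illustrative, not the patent's file format):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CompressedNonTerminal:
    weights: np.ndarray  # the weight vector 134
    split_point: float   # the SplitPoint 139

    def score(self, record: np.ndarray) -> float:
        """Sigmoid of the weighted sum; compared with split_point when routing."""
        return 1.0 / (1.0 + np.exp(-float(np.dot(self.weights, record))))
```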
- a loop 124 performs a step 126 for each leaf node 74 in the tree.
- Step 126 distributes the set of training records 82 routed to each such leaf node bin 74 to a successive one of the slave processors 52 shown in FIG. 2. This can be done in a cyclical, or round robin manner, so that if there are more leaf nodes than slave processors, once all the slave processors have received the set of training records for a first leaf node, step 126 will start successively distributing a second set of leaf node records to the slave processors, and so on. This is done to attempt to distribute the computation of training leaf node neural nets relatively evenly among the processors. It can be seen that the non-terminal nodes of the neural tree network function to partition the data used by the slave processors in training the hidden-layer neural nets.
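- The cyclical distribution of step 126 is essentially a modulo assignment; a brief sketch with illustrative names:

```python
def assign_bins_round_robin(leaf_bins, num_slaves):
    """Deal leaf-node record bins out to slave processors cyclically, so the
    leaf-net training work stays roughly balanced when bins outnumber slaves."""
    assignments = {s: [] for s in range(num_slaves)}
    for i, records in enumerate(leaf_bins):
        assignments[i % num_slaves].append(records)
    return assignments
```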
- step 128 of BuildModel_Master causes each of the slave processors to execute BuildModel_Slave, the slave process for using the set of training records associated with each leaf node to train that node's associated hidden-layer neural network.
- Once the master instructs the slaves to train the leaf node neural networks, it waits in step 130 for each such slave to send back a compressed representation of the neural networks it has trained.
- the master then attaches each such compressed leaf node network to the place corresponding to its leaf node in the compressed tree representation formed by step 122. Once this has been done for all of the leaf nodes, a compressed representation of the full, trained neural tree network will have been completed. Once step 131 has stored this complete tree network on hard disk, the BuildModel_Master process will be complete, and will stop execution.
- FIG. 13 illustrates BuildModel_Slave 148, a highly simplified pseudo-code representation of the process which is run on each of the slave processors to train the tree's leaf node neural networks. A separate instance of this process is run for each leaf node which has been associated with a given slave processor.
- step 150 creates a hidden-layer neural network 75 , indicated schematically in FIG. 2, for its associated leaf node.
- This network has an input for each of I source fields, and an output for each of J target fields, where the integer values I and J have been previously specified by a user of the system, and where at least the I fields are included in the N parameters used to train the non-terminal nodes.
- the neural network will also include a hidden layer which contains a number of nodes specified by the user.
- a loop 151 causes a training loop 152 to be repeated until the percentage change in the sum of the differences between generated and actual outputs between training loops is below a given level.
- the expanded view of the leaf node net shown in the lower right hand corner of FIG. 2 schematically represents this training process.
- a step 154 uses each record in the leaf node's training set to train the leaf node's neural network. As indicated in FIG. 2, during training each record has each of its I source fields 83′ connected to a corresponding one of the network's inputs and each of its J target fields 83″ connected to a corresponding one of the network's outputs. The difference between the values generated at the network's J outputs and the training record's values for the corresponding J target fields is used to train the network's weights, such as by back propagation or any other method for training hidden-layer neural networks.
- step 156 creates a compressed representation of the leaf node's neural net.
- This compressed representation consists of a matrix for the input layer having a row for each hidden-layer node and a column for each input layer node. Each entry in the matrix contains the weight value of the connection between its corresponding input and hidden-layer nodes.
- the compressed representation also includes a corresponding matrix having a row for each output node and a column for each hidden-layer node. Where there is only one output node, this matrix will reduce to a vector.
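- Given those two matrices, applying a compressed leaf network is two matrix-vector products. A sketch (that the output layer is also sigmoidal is an assumption here, not stated in the text):

```python
import numpy as np

def leaf_net_forward(x, w_in, w_out):
    """x: the I source-field values; w_in: (H x I) input-to-hidden weights;
    w_out: (J x H) hidden-to-output weights (a vector when J == 1)."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    hidden = sigmoid(w_in @ x)      # hidden-layer activations
    return sigmoid(w_out @ hidden)  # estimates for the J target fields
```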
- FIG. 14 is a schematic graphical representation of the overall ApplyModel process.
- a large apply data set 160 is split into sub-sets, or partitions, 162, if it is not already so partitioned.
- Each such partition is supplied to a separate slave processor 52 , and each data record in that partition is passed through a copy of the compressed neural tree net 164 created by the BuildModel process which is stored on that processor.
- the records 82′ of the apply data set will normally include all of the N parameters used as inputs to the neural nets of the non-terminal nodes. In some instances they might not yet have any values for the J target fields of the leaf node neural networks, since, in many instances, it is the purpose of the neural tree network to predict the values in those fields before actual values for those fields have been determined. Often the apply data base is huge, containing many millions of records.
- FIG. 16 illustrates ApplyModel_Master 170 , a simplified pseudo-code representation of the process run on the master processor 52 A to control the ApplyModel process shown schematically in FIG. 14. In this simplified illustration this process is shown including steps 172 - 178 .
- Step 172 tests to see if the apply data set has already been partitioned, and, if not, it partitions it. Since each slave processor will have an identical copy of the compressed neural tree network 164, it makes no difference into which processor's partition a particular record is sent. Thus, any partitioning scheme, such as a simple round-robin scheme, which distributes records between partitions in a roughly equal manner, and which executes relatively quickly, will work well for this purpose.
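- Step 172's logic might be sketched as follows; because every slave holds an identical copy of the tree, any quick scheme that distributes records roughly evenly is acceptable (names illustrative):

```python
def partition_apply_set(records, num_slaves, existing_partitions=None):
    """Pass through partitions made by an upstream module, or deal the
    records out round-robin, one partition per slave processor."""
    if existing_partitions is not None:
        return existing_partitions
    partitions = [[] for _ in range(num_slaves)]
    for i, record in enumerate(records):
        partitions[i % num_slaves].append(record)
    return partitions
```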
- the ApplyModel process is one of a set of modular computing processes 180 which can be run on a parallel computer. If the ApplyModel process 180A is being run without any preceding modular process, as shown schematically in FIG. 18, or with an immediately preceding modular process which does not produce a separate partition for each of the processors to be used in the ApplyModel process, the partitioning process 182 which is part of the module 180A will have to partition the apply data base, as indicated in step 172.
- If the ApplyModel process is being performed immediately after a process which has already partitioned the apply data set, then the partitioning process 182 will merely pass through the previously made partitions.
- This is shown in FIG. 19, in which the ApplyModel process follows a preprocessing process 180B, which is used to remove duplicate records and to reduce the number of fields in each record to those necessary for the ApplyModel process.
- step 174 distributes a copy of the compressed complete neural tree network 164 to each slave processor node.
- step 176 causes each processor to run the ApplyModel_Slave process 190 on its associated data partition.
- step 178 receives all of the records selected by all of the leaf node neural networks running on all of the slave processors, and reports them to the user's workstation 56 shown in FIG. 1. Once this is done the ApplyModel_Master process is complete, and it terminates execution.
- FIG. 17 provides a highly simplified pseudo-code illustration of the ApplyModel_Slave process 190.
- FIG. 15 illustrates this process graphically.
- Loop 192 of ApplyModel_Slave is performed for each record 82′ in the data partition supplied to the individual processor on which ApplyModel_Slave is running. This loop causes each record to be appropriately routed down through the compressed neural tree 164. It starts with a step 194 which makes the root node 72A′ the initial CurrentNode for the current record. Then a loop 196, comprised of steps 198 and 200, is repeated until the record's CurrentNode is no longer a non-terminal node. Step 198 applies each of the current record's N parameter values to the corresponding inputs of the node's two layer neural network.
- Based on the resulting output, step 200 selects one of the CurrentNode's two child nodes as the new CurrentNode.
- the loop 196 routes a given record from the root node all the way down to that one of the tree's leaf nodes 75′ corresponding to its associated portion of the N-dimensional space defined in the BuildModel training process.
- step 202 applies the record's I source fields to the inputs of the leaf node's hidden-layer neural network. Then step 204 classifies the record depending upon the output of that neural network, normally treating the record as a selected record 82′ if the leaf node network's output for it is above a threshold value 208, and discarding the record if it is not.
- the estimated values produced at the outputs of a leaf node's neural network for each record are recorded in that record's target fields, and saved as part of the record for later use. Such later use can include statistical or data base analysis of the estimated fields of the apply data set.
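- Putting loop 192 together: descend the compressed tree by comparing each non-terminal score with its SplitPoint, then run the leaf's hidden-layer net and threshold its output. A hedged sketch, assuming nodes shaped like the CompressedNonTerminal example above (with a children list that is empty at leaves, and hypothetical w_in/w_out fields on leaf nodes):

```python
import numpy as np

def apply_record(record, root, threshold=0.5):
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    node = root
    while node.children:  # loop 196: descend through non-terminal nodes
        s = sigmoid(np.dot(node.weights, record))
        node = node.children[0] if s < node.split_point else node.children[1]
    hidden = sigmoid(node.w_in @ record)    # step 202: leaf hidden layer
    outputs = sigmoid(node.w_out @ hidden)
    return outputs if float(np.max(outputs)) >= threshold else None  # step 204
```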
- step 206 sends the results of the classification to the master processor, and execution of ApplyModel_Slave terminates.
- the neural tree network produced by the above method has the advantage of performing better analysis for a given level of computation than prior neural networks or prior neural tree networks.
- because each end net sees only records from its own sub-space of the data, the distributions of training samples fed to each such end net are much more similar. This results in three advantages: 1) it takes fewer hidden-layer nodes to accurately model the data supplied to each network; 2) it takes fewer training cycles to train each hidden-layer network; and 3) each training cycle has fewer training records.
- Each of these three factors alone results in computational savings. Their combination results in a much greater one.
- FIG. 20 illustrates another embodiment of the invention which is similar to that described above with regard to FIGS. 1-19, except that the non-terminal nodes 72″ of its neural tree network 70″ contain hidden-layer neural networks 76″, instead of the two layer networks 76 shown in FIG. 2.
- the training of such non-terminal nets in the embodiment of FIG. 20 is very similar to that used in the embodiment of FIG. 2.
- the hidden-layer net is trained in the same manner as stated in step 110 of FIG. 3, that is, by applying each of the N parameters of each training record to the net's inputs and supplying the ParameterOfGreatestSpread to the net's output and using a training algorithm to modify the net's weights to reduce the difference.
- the only difference is that the application of the training algorithm has to update more weights, since there is a hidden layer.
- the selection of which records are sent to each child node of a given non-terminal node 72″ is basically the same as that described above with regard to steps 112-120 of FIG. 3.
- the training records to be supplied to the non-terminal node are ordered on a ScoreList 140 in terms of their corresponding outputs on the neural net once it has been trained.
- a SplitPoint 139 is chosen on the ScoreList such that there is a desired ratio of records above and below it. And the records above the SplitPoint are sent to one child node and those below it are sent to the other.
- the neural tree network processes described above could all be run on one processor. Or, if run on multiple processors, they could be run on multiple processors of many different kinds, including SMP, or symmetric multi-processing, systems; massively parallel systems similar to that in FIG. 1 but having many more processors; or more loosely coupled networks of computers, such as networks of computer workstations.
- the tasks described above as being performed on only one processor could be run on multiple processors.
- the task of training non-terminal nodes and using them to partition data for the training of leaf node neural networks should be parallelized if it will significantly increase the speed with which the tree can be built and trained. This would be the case if the number of non-terminal nodes becomes very large, or if the amount of computation associated with training each of them becomes large. For example, when the non-terminal nodes have hidden layers, as in FIG. 20, parallelization will tend to be more appropriate.
- neural tree networks similar to those shown in FIGS. 2 and 20 can be used to partition data for multiple processors which are using the data for purposes other than training hidden-layer neural networks.
- such neural network trees can be used to partition data for parallel processors performing other types of modeling or analysis techniques, such as multi-dimensional statistical modeling, Kohonen networks, and discrimination trees.
- the decision tree part of the entire neural tree network is replaced by another type of analytical classification algorithm, such as a Kohonen network, and the subsets of training data or apply data created by such a Kohonen network would be supplied to hidden layer neural networks.
- the Kohonen network could be used to partition a training set into subsets, each representing a class of records.
- a neural tree network of the type shown in FIGS. 2 and 20 could be applied in a process similar to that shown in FIG. 14, except that the partitioner 182, shown in FIG. 18, associated with the ApplyModel object would pass records through the compressed representation of the decision tree part of the neural tree network, and the individual parallel processors receiving a partition of data set records sent to them by the tree partitioner would pass those records through the compressed representation of the corresponding hidden layer neural network.
- the decision tree partitioner would decide which of the processors executing the hidden layer neural networks a given record should be sent to, based on which of the decision tree's leaf nodes the record is routed to. If the system is running more than one hidden layer neural network on any processor node, the partitioner must label records sent to such nodes, indicating which leaf node the record has been associated with.
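- When one processor hosts several leaf networks, the labeling requirement is just a tagged send. A sketch, with route_to_leaf as a hypothetical helper that returns the index of the leaf a record reaches:

```python
def dispatch(record, tree, num_processors):
    """Route a record to its leaf, pick the processor hosting that leaf's
    network, and tag the record with the leaf id so the receiving processor
    can hand it to the right hidden-layer net."""
    leaf_id = route_to_leaf(tree, record)  # hypothetical routing helper
    processor = leaf_id % num_processors   # illustrative leaf-to-processor map
    return processor, (leaf_id, record)
```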
Abstract
Description
- This application is a continuation of prior application Ser. No.: 09/281,984, filed on Mar. 29, 1999, now pending, which is a continuation of prior application Ser. No.: 08/624,844, filed Mar. 25, 1996, issued Jun. 1, 1999.
- The present invention relates to computer systems for analyzing, and computing with, sets of data, such as, for example, extremely large data sets.
- As computing power has grown, it has become increasingly practical to process data, and, in particular, large amounts of data, in new and useful ways. For example, the term “data base mining” has been used to describe the practice of searching vast amounts of data for commercially, medically, or otherwise important patterns, patterns which would probably have been impossible to find by human pattern matching, and which probably would have taken too long to have found with prior generations of computer equipment.
- For example, one common uses of data base mining is for corporations to search through data bases containing records of millions of customers or potential customers, looking for data patterns indicating which of those customers are sufficiently likely to buy a given product to justify the cost of selecting them as targets of a direct marketing campaign. In such searches, not only are millions of records searched, but hundreds, or even thousands of fields within each record. Such data base mining has proven much more successful in selecting which customers are most likely to be interested in a given new product than prior methods.
- Similarly, data base mining can be used for scanning vast numbers of medical records to look for subtle patterns associated with disease; for scanning large numbers of financial transactions to look for behavior likely to be fraudulent; or to study scientific records to look for new casual relationships.
- Because they often involve a tremendous number of records, and are often seeking patterns between a large number of fields per record, data base mining operations tend to require huge amounts of computation. This, in combination with the fact that most data base mining operations can be easily partitioned to run on separate processors, has made data base mining one of the first major commercial uses of massively parallel computers. But even when run on most commercially available parallel systems many data base mining functions are relatively slow because of their tremendous complexity. Therefore there is a need to improve the speed at which such tasks can be performed.
- Neural nets are a well known device for automatically selecting which patterns of values in certain source fields of records are likely to be associated with desired values in one or more target fields. A neural network normally includes an input layer comprised of a plurality of input nodes, an output layer of one or more output nodes, and, in hidden-layer networks, one or more so-called hidden layers, each comprised of one or more nodes. hidden layer are hidden in the sense that they do not connect directly to any inputs or outputs.
- The knowledge in a neural net is contained in its weights. Each node in the input layer or hidden layer contains a weight associated with its connection with each node in the next layer. Thus, in a typical hidden-layer network, each node in the input layer has a separate weight for its connection to each node in the hidden layer, and each node in the hidden layer has a separate weight for its connection to each node in the output layer. The value supplied to each given node in a given layer is supplied to each individual node in the successive layer, multiplied by the weight representing the connection between the given node and the individual node in the successive layer. Each node receiving such values generates an output, which is a function of the sum of the values supplied it. Usually the output is a non-linear function of the sum of values supplied to the node, such as a sigmoid function. The sigmoid function has the effect of making the output operate like an on-off switch whose output varies rapidly from a substantially “off value to a substantially “on” value as the sum of the values supplied to the node crosses a small threshold region.
- A common way for training the weights of a neural network is to take each record in a training set and apply the value of each of its source fields to a corresponding input of the net. The network's weights are then modified to decrease the difference between the resulting values generated at the network's one or more outputs and the actual values for the outputs corresponding target fields in the record. There are a variety of well know methods for making such weight modifications, including back propagation, conjugate gradient, and quick propagation. The training process is normally repeated multiple times for all the training records until the sum of the difference between the generated and actual outputs approaches a relative minimum.
- One of the problems with neural nets is that the amount of time to appropriately train them to recognize all of the possible source field patterns associated with desired target field values goes up very rapidly as the number of source or target fields does, and as the number of different types of source patterns which might be associated with a desired target does. Even with large parallel computer systems the amount of time required to properly train such networks to learn such complex sets of patterns is often prohibitive.
- In an attempt to improve the speed at which neural networks can train, a new type of neural network has been proposed. These are so called neural tree networks. These are decision trees, a well known type of classifying tool, in which a neural network is placed at each of the network's non-terminal nodes. In such trees, each non-terminal node is a two layer network, which trains much more rapidly than a hidden-layer network. The data applied to each nonterminal node is used to train up the node's neural net. This is done in a training process which applies the source fields used in the overall classification process to the input nodes of the net and the one or more target fields used in that classification process to the output the two layer net. Once the network has been trained over the training set, the data objects are split between the node's child nodes based on whether the one or more sigmoidal output of the trained net is “on” or “off” for each such data object. The data object reaching the tree's terminal, or leaf, nodes are considered classified by the identity of the particular leaf node they reached.
- Such neural tree networks have the advantage of training much more rapidly than traditional neural networks, particularly when dealing with large complex classification tasks. However, they are not as discriminating as might be desired.
- In general, a major issue in parallel computing is the division of the computational task so that a reasonable percentage of the computing power of multiple processor can be taken advantage of, and so the analytical power of the process is as high as possible. This issues is particularly important when it comes to many data base mining functions, such the training of neural networks mentioned above or of other modeling tasks.
- It is an object of the present invention to provide apparatuses and methods for more efficiently computing large amounts of data.
- It is another object of the present invention to provide apparatuses and methods for efficiently finding patterns in data sets, particularly large data sets.
- It is still another object of the present invention to provide apparatuses and methods for efficiently using and training neural networks to find patterns in data set.
- It is yet another object of the present invention to provide apparatuses and methods for more efficient parallel computing.
- According to one aspect of the present invention a computer system with P processors receives data objects having N parameters. It divides an N-dimensional data space defined by the N parameters into M sub-spaces, where M is greater than or equal to P. This is done in such a manner that the boundaries between the resulting sub-spaces need not be orthogonal to the Ndimensions. The system associates a different set of one or more sub-spaces with each of the P processors. It distributes data objects located in each sub-space to the sub-space's associated processor and causes each processor to perform a computational process on each of the data objects distributed to it.
- According to another aspect of the invention, a computer system with P processors receives set of data objects to be processed. A decision tree partitions the data set into at least M data sub-sets, where M is equal or greater than P. A different set of one or more of the sub-sets is associated with each processor, and the data objects in each sub-set are sent to the associated processor for processing. In some embodiments, the process of using a decision tree to partition the data set is performed on fewer than P processors. In many embodiments, the decision criteria of the non-terminal nodes of the decision tree are trained on the data set, in a process where each non-terminal node both trains on and then divides between its children the data supplied to it.
- In some embodiments, the non-terminal nodes are neural nets having hidden layers. In some embodiments, the decision criteria of the non-terminal nets can be automatically set to achieve a desired ratio between the number of data objects sent to each of such node's child nodes. In some such embodiments, the system automatically configures the decision tree to have a number of leaf nodes which is an integer multiple of the number P of processors.
- According to another aspect of the invention, a computer system divides an N-dimensional data space, having a separate dimension for each of N parameters associated with the data set, into M sub-spaces. It associates each of these M sub-spaces with a corresponding one of M hidden-layer neural networks, and uses the data objects in each of the M sub-spaces to train that sub-space's associated hidden-layer neural network. The resulting divisions need not be orthogonal to the N dimensions of the space.
- According to another aspect of the invention, a computer system creates a decision tree having a neural network for each of its nodes, including a hidden-layer network for each of its terminal, or leaf, nodes. Each of the tree's non-terminal nodes use the portion of the training data which is supplied to it to train its associated neural network and then uses that neural network, once trained, to determining which of the training data object supplied to it should be supplied to each of its child nodes. In one embodiment, the net in each non-terminal node is trained to divide an N-dimensional space defined by parameters from the training data set into sub-spaces, and the data objects associated with each sub-space are routed to a different one of that non-terminal node's child nodes. In such an embodiment, each non-terminal node can be a two layer neural networks which defines a single vector of weights in the N-dimensional space, and the data space is split by a plane perpendicular to that vector.
- The portion of the training set supplied by the decision tree to each of its terminal, or leaf, nodes is used to train that node's corresponding neural network. In preferred embodiments, different leaf node networks are trained on different processors. In many embodiments, a copy of the entire decision tree, including the neural networks in both its non-terminal and leaf nodes, is stored on each of a plurality of processors. Then a set of new data objects is split into separate data partitions, one for each of such processor. Finally data objects from the partition associated with each processor are passed down through the copy of the complete decision tree stored on that processor. This causes each such data object to be routed to a given leaf node of the tree, at which point the hidden-layer neural network associated with the given leaf node will analyze the data object, such as by classifying it, or recording an estimated value for each of its target fields.
- According to another aspect of the invention, a neural net tree has hidden-layer neural networks in it non-terminal nodes.
- According to another aspect of the invention, a computer system includes a neural network, such as one in the nodes of one of the above mentioned decision trees, which automatically causes a selected percent of data objects applied to the neural network to be selected for a given purpose.
- These and other aspects of the present invention will become more evident upon reading the following description of the preferred embodiment in conjunction with the accompanying drawings, in which:
- FIG. 1 is a schematic representation of one type of parallel computing system which can be used with the present invention;
- FIG. 2 is a schematic representation of the BuildModel process for training a neural tree network which embodies the present invention;
- FIGS.3 illustrates BuildModel-Master, a simplified pseudo-code representation of the process run on one processor to train the non-terminal nodes of the neural tree network as part of the training process shown in FIG. 2;
- FIG. 4 is a schematic representation of a data space defined by a portion of a training data set supplied to a non-terminal node of the neural tree network shown in FIG. 2, and of the selection of a parameter whose values have the greatest spread in that portion of the data set;
- FIG. 5 is a schematic representation of the process of training a non-terminal node in the tree of FIG. 2;
- FIG. 6 is a schematic representation of the vector of weights defined by the training process of FIG. 5;
- FIG. 7 is a schematic representation of the vector of FIG. 6 shown in the spatial coordinates of FIG. 4;
- FIG. 8 is a schematic representation of the process of using the neural net of a nonterminal node of the tree shown in FIG. 2 to split the training data object supplied to it between that node's two child nodes;
- FIG. 9 is a schematic representation of the decision process shown in FIG. 8 represented in the spatial coordinates of FIGS. 4 and 7;
- FIG. 10 is a schematic representation of the data space and data points of FIG. 4, as split by the process of FIGS. 8 and 9 into two sub-spaces;
- FIG. 11 is a schematic representation of the data space of FIG. 10, indicating that the processes of FIGS.5-9 are separately applied to each of sub-spaces shown in FIG. 10.
- FIG. 12 is a schematic representation of the space of data points of FIG. 10, with each of the two sub-spaces shown in FIG. 10 having been sub-divided into two sub-sub-spaces.
- FIG. 13 illustrates BuildModel_Slave, a simplified pseudo-code representation of the process run on each of a plurality of processors to train the hidden-layer neural networks associated with the leaf nodes of the neural tree network shown in FIG. 2;
- FIG. 14 is a schematic representation of the ApplyModel process in which a large Apply data set is partitioned, and each separate partition is run through a copy of the neural tree network trained in FIG. 2 on a separate processor;
- FIG. 15 is a schematic representation of the copy of the neural tree network contained in each processor in FIG. 14, and of the data records passing through that tree;
- FIG. 16 illustrates ApplyModel_Master, a simplified pseudo-code representation of the process run on a single processor to control the ApplyModel process shown schematically in FIG. 14;
- FIG. 17 illustrates ApplyModel_Slave, a simplified pseudo-code representation of the process run on each of a plurality of separate processor nodes in the ApplyModel process shown in FIGS. 14 and 15;
- FIG. 18 is a schematic representation of the ApplyModel process when it is supplied with un-partitioned data;
- FIG. 19 is a schematic representation of the ApplyModel process when it is used in conjunction with another computational process which supplies it with data that is already partitioned;
- FIG. 20 illustrates an alternate embodiment of the ApplyModel process in which the neural tree network includes hidden-layer networks in its non-terminal nodes.
- FIG. 1 shows one type of parallel computer system 50 which can be used to create an embodiment of the present invention. In this system, each of eight processors 52 is connected to the others through a high-speed computer network 54. Also connected to this computer network is a workstation 56 which enables a user to control the system and to receive selective output from it. Each processor 52 includes a central processing unit, or CPU, 58, which executes instructions stored in, and reads and writes data from and to, the random access memory, or RAM, 60. A network interface 62 performs the function of reading and writing data over the network between processors. A disk interface 64 enables each processor to read and write data to one or more hard disks 66 connected to each processor.
- The computer programs and data structures described in the following application are stored in one or more of the random access memories 60 or hard disks 66, and are executed or manipulated by one or more of the processors' CPUs. For example, in FIG. 1 the BuildModel_Master program code 89 and the neural tree network data structure 70 are shown stored in the RAM 60 of the master processor 52A, and the BuildModel_Slave process 148 and the leaf node neural network 75 are shown stored in the RAM of each slave processor 52. When such programs and data structures are stored in RAM or hard disk memory and are processed by CPUs, they convert the computing system 50 into a system for performing the present invention's functions.
- In FIG. 1, one of the processor nodes 52A is labeled a "master". The other processor nodes are labeled "slave". In the parallel processing scheme used in a preferred embodiment of the invention, certain computational processes are best performed on one machine. There is also a benefit in having one machine tell the others what to do. This one machine is called the master, since it controls the operation of the other, slave, processors. In the embodiment shown in the figures, the master runs on a different machine than any of the slaves. In other embodiments, a single processor can act as both a master and a slave.
- FIG. 2 illustrates BuildModel, a process of training a neural tree network 70 used in one embodiment of the present invention. The tree network 70 contains a plurality of non-terminal nodes 72 and terminal, or leaf, nodes, each of which is represented by a bin for data records 74 and a hidden-layer neural network 75. Each non-terminal node contains a two-layer neural network 76. Each such two-layer network, itself, contains a layer of input nodes 78 and one output node 80.
- The non-terminal nodes of the tree are trained, and the records 82 of a training data set 84 are divided into leaf node bins 74, on the master processor. The training records routed to each terminal, or leaf, node by the non-terminal nodes of the tree are then used to train the hidden-layer neural network associated with that leaf node. This training process is performed on one of the slave processors 52.
- FIG. 3 illustrates BuildModel_Master, a highly simplified pseudo-code representation of the process which is run on the master to build and train the tree's non-terminal nodes and to select which records should be associated with each of the leaf node bins 74.
- In this simplified description, BuildModel_Master starts with steps 90-96, which create the basic tree topology of the neural network decision tree 70. Step 90 creates the largest balanced binary tree topology whose number of temporary leaf nodes fits within NoOfEndNets, the desired number of leaf nodes specified by a user. This balanced tree will have a number of leaf nodes corresponding to the largest full power of two which fits within NoOfEndNets. In the example shown in FIG. 2, NoOfEndNets has been set to seven, so there will be a separate leaf node for each of the seven slave processors 52 shown in that figure. In this example, step 90 will create a tree having the top three non-terminal nodes 72 shown in FIG. 2, starting with the root node 72A. At this point the incomplete tree topology will have room for four temporary leaf nodes, since four is the largest power of two fitting within seven.
- Next, step 92 adds non-terminal nodes to the bottom level of the nascent tree until there are NoOfEndNets leaf nodes. In the example of FIG. 2, the bottommost three non-terminal nodes 72 are added in this step. This causes the total number of leaf nodes 74 to equal seven, the desired number indicated by the user.
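- A minimal sketch of steps 90 and 92 (Python; the Node class and the helper names are hypothetical, since the patent gives only pseudo-code):

```python
class Node:
    """A tree node; an empty children list marks a leaf."""
    def __init__(self):
        self.children = []

def split(node):
    node.children = [Node(), Node()]
    return node.children

def build_topology(no_of_end_nets):
    # Step 90: largest balanced binary tree whose leaf count is the
    # biggest power of two fitting within no_of_end_nets.
    root = Node()
    frontier = [root]
    for _ in range(no_of_end_nets.bit_length() - 1):   # floor(log2)
        frontier = [c for n in frontier for c in split(n)]
    # Step 92: each split of a bottom-level leaf adds one net leaf,
    # until there are exactly no_of_end_nets leaves.
    for leaf in frontier[: no_of_end_nets - len(frontier)]:
        split(leaf)
    return root
```

With no_of_end_nets set to seven, as in FIG. 2, this yields four balanced leaves and then three further splits, for seven leaves in total.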
- Next, step 94 associates a RecordRatio value equal to one divided by NoOfEndNets with each leaf node 74. In our example this causes a value of 1/7 to be associated with each leaf node 74. This is done as part of an effort to ensure that each leaf node 74 will have a substantially equal number of records supplied to it in the training process. Then step 96 goes up the tree one level at a time, associating with each non-terminal node a RecordRatio value equal to the sum of the RecordRatios of that node's two child nodes. Once this is done, each non-terminal node will know what percentage of the records supplied to it is to be supplied to each of its two child nodes, based on the ratio of the RecordRatio values of those two child nodes.
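- Steps 94 and 96 amount to a single bottom-up pass over the tree; a sketch (hypothetical helper, reusing the Node class from the previous sketch):

```python
def assign_record_ratios(node, no_of_end_nets):
    """Give each leaf RecordRatio = 1/NoOfEndNets (step 94), then give
    each non-terminal node the sum of its children's ratios (step 96)."""
    if not node.children:
        node.record_ratio = 1.0 / no_of_end_nets
    else:
        node.record_ratio = sum(assign_record_ratios(c, no_of_end_nets)
                                for c in node.children)
    return node.record_ratio
```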
- Next, a step 98 supplies all the records 82 of the training set 84 to the root non-terminal node 72A of the tree. Once this is done, a step 100 performs a loop for each horizontal level of the tree. This is the basic loop in the training process, and once it has been completed for all such levels, all of the tree's non-terminal nodes will have been trained and all of the training records will have been routed to one of the leaf node bins 74.
- For each horizontal level of the tree containing non-terminal nodes, loop 100 performs a sub-loop for each non-terminal node in that level. Each such loop consists of steps 104-120.
- Step 104 selects, from the N parameters of the training records used in the non-terminal node networks, the ParameterOfGreatestSpread, that is, the one of the N parameters over which the training records supplied to the current node have the greatest spread. The N parameters used for such purposes will normally comprise all of the I source fields to be used in training the leaf node hidden-layer neural networks 75, and perhaps also the J target fields to be used in that training. For purposes of step 104, spread is best measured by a statistical measurement of spread, such as standard deviation.
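- With the records viewed as rows of a matrix, step 104 reduces to a one-liner; a sketch (the use of numpy and the function name are assumptions of this illustration):

```python
import numpy as np

def parameter_of_greatest_spread(records):
    """Step 104 sketch: index of the parameter (column) whose values
    have the greatest spread, measured by standard deviation."""
    return int(np.argmax(records.std(axis=0)))
```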
- FIG. 4 illustrates three dimensions 128A-128C of the N-dimensional space 130 defined by the N parameters 83 used in training the non-terminal nodes. The set of N parameters used by the non-terminal nodes can include integer and binary values, as well as real number values. FIG. 4 shows the records 82 of the training set as data points in that N-dimensional space. In the example shown in FIG. 4, the parameter 83A, corresponding to the vertical axis 128C, has the greatest spread of values.
- Once step 104 has selected the ParameterOfGreatestSpread for the current node, step 106 creates a two-layer neural network for it, with a separate input node for each of the remaining N parameters to be used in training the non-terminal nodes and one output node.
- Then a step 108 repeatedly performs a training loop 109 until the node's network appears to have been properly trained.
- FIG. 5 provides a schematic representation of the training process. Each iteration of the training loop 109 performs a step 110 for each training record 82 supplied to the current node. This step supplies the values of each of the current training record's N parameters 83 to the corresponding inputs 76 of the non-terminal node's neural net. It also supplies the ParameterOfGreatestSpread 83A to the network's output 80. It compares the generated value produced at the output node, in response to the values supplied to the inputs 76, to the value supplied to the output by the training record. It then modifies the weight 132 associated with each input 76 so as to reduce that difference, by using one of the well-known schemes for training the weights of neural networks. FIG. 6 illustrates the set of weights W associated with each of the N inputs 76 as a vector 134, having the form W1, W2, W3, . . . WN.
- Normally the loop 108 stops training either when a certain number of training loops has been exceeded or when the reduction, between successive training loops, in the sum, taken over each training cycle 109, of the differences between generated and training record values for the output 80 drops below a given level.
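- A hedged sketch of one way this single-vector training could look, using a delta-rule update (the learning rate, loop limits, and function name are illustrative guesses, not values from the patent):

```python
import numpy as np

def train_split_net(records, target_col, lr=0.01, max_loops=100, tol=1e-4):
    """Train a non-terminal node's two-layer net to predict the
    ParameterOfGreatestSpread from the remaining parameters."""
    inputs = np.delete(records, target_col, axis=1)   # remaining params
    targets = records[:, target_col]
    w = np.zeros(inputs.shape[1])
    prev_err = None
    for _ in range(max_loops):
        err = 0.0
        for x, t in zip(inputs, targets):    # one training cycle 109
            diff = t - x @ w                 # training value - generated
            w += lr * diff * x               # reduce the difference
            err += abs(diff)
        if prev_err is not None and prev_err - err < tol:
            break                            # reduction below given level
        prev_err = err
    return w                                 # the weight vector 134
```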
- FIG. 7 illustrates that once the current non-terminal node's neural network has been trained by multiple iterations of the loop 108, the vector 134 defined by the net's weights will have a direction generally corresponding to the direction of greatest spread of the distribution of records 82 in the N-dimensional space 130. It should be noted that this vector will not be parallel to any parameter axis of the N-dimensional space, except in the unusual case in which the axis of maximum spread of the node's training data is itself parallel to such an axis.
- Once the current non-terminal node's network has been trained, a loop 112, comprised of sub-steps 114 and 116, is performed for each record in the node's training data.
- FIG. 8 schematically represents the loop 112 and the functions of its sub-steps. For each of the records 82, step 114 applies the record's N parameters 83 to the inputs 76 of the node's network, and step 116 uses the resulting value 138 produced at the net's output as a score. It indexes the current record in a ScoreList 140, ordered by such scores.
- For purposes of step 114, the value of the output node 80 is just the sum of each input times its associated weight. There is no need to multiply that sum by the sigmoid function. As a result, each score 138 corresponds to the perpendicular projection of each data point 82 onto the vector 134, as shown in FIG. 9.
- Once all the records have been ordered based on their outputs, step 118 selects a SplitPoint 139 in the ScoreList 140 having the same ratio of records scored above and below it as the ratio between the RecordRatios of the current non-terminal node's two child nodes. Moving this SplitPoint up and down the ScoreList corresponds to translating a plane of split 142, perpendicular to the vector 134, in a direction parallel to that vector. As indicated schematically in FIG. 10, once the SplitPoint is selected, the corresponding plane of split 142 will divide the distribution of data records supplied to the node. It will do so in a manner that associates a desired ratio of training records with each of the non-terminal node's two child nodes.
- Once step 118 has split the current node's training records, step 120 sends the training records on each side of the SplitPoint to a respective one of the current node's two child nodes.
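- Steps 112-120 reduce to a sort and an index computation; a sketch (the helper name choose_split and the use of numpy are assumptions):

```python
import numpy as np

def choose_split(inputs, w, left_ratio, right_ratio):
    """Score records by projection onto the weight vector, order the
    ScoreList, and pick the SplitPoint that divides them in the
    children's RecordRatio proportion."""
    scores = inputs @ w                       # linear output; no sigmoid
    order = np.argsort(scores)                # the ScoreList
    n_left = round(len(inputs) * left_ratio / (left_ratio + right_ratio))
    split_point = scores[order[max(n_left, 1) - 1]]      # the SplitPoint
    return split_point, order[:n_left], order[n_left:]   # child record indices
```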
- It can be seen that each iteration of the loop 100 will cause the non-terminal nodes to split the data space 130 of the training records supplied to them into sub-spaces, with the sub-space 130A being divided into sub-sub-spaces 130AA and 130AB, and the sub-space 130B being divided into the sub-sub-spaces 130BA and 130BB. This process of division and sub-division will be repeated for each horizontal layer of the tree until the data space has been divided into a number of sub-space regions equal to the number of the tree's leaf nodes. Moreover, when the process is completed each leaf node bin 74 will end up having approximately the same number of records.
- Returning now to FIG. 3, once the loop 100 has been completed for all of the tree's non-terminal nodes, the neural networks associated with all of the tree's non-terminal nodes will have been trained and all of the training records will have been distributed to the leaf node bins 74. At this point step 122 creates a compressed representation of the tree network. In this representation, each non-terminal node's neural net is represented by its weight vector 134 and its SplitPoint 139.
- Once this is done, a loop 124 performs a step 126 for each leaf node 74 in the tree. Step 126 distributes the set of training records 82 routed to each such leaf node bin 74 to a successive one of the slave processors 52 shown in FIG. 2. This can be done in a cyclical, or round-robin, manner, so that if there are more leaf nodes than slave processors, once all the slave processors have received the set of training records for a first leaf node, step 126 will start successively distributing a second set of leaf node records to the slave processors, and so on. This is done to distribute the computation of training leaf node neural nets relatively evenly among the processors. It can be seen that the non-terminal nodes of the neural tree network function to partition the data used by the slave processors in training the hidden-layer neural nets.
- Once the record set associated with each leaf node has been distributed by the master processor to an associated slave processor, step 128 of BuildModel_Master causes each of the slave processors to execute BuildModel_Slave, the slave process for using the set of training records associated with each leaf node to train that node's associated hidden-layer neural network.
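- The round-robin assignment of leaf bins to slaves is simple modular arithmetic; a sketch (hypothetical helper name):

```python
def distribute_round_robin(leaf_bins, n_slaves):
    """Step 126 sketch: deal the leaf-node record sets out to slave
    processors cyclically so training work is spread roughly evenly."""
    assignments = [[] for _ in range(n_slaves)]
    for i, bin_records in enumerate(leaf_bins):
        assignments[i % n_slaves].append(bin_records)
    return assignments
```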
- Once the master instructs the slaves to train the leaf node neural networks, it waits in step 130 for each such slave to send back a compressed representation of the neural networks it has trained. The master then attaches each such compressed leaf node network to the place corresponding to its leaf node in the compressed tree representation formed by step 122. Once this has been done for all of the leaf nodes, a compressed representation of the full, trained neural tree network will have been completed. Once step 131 has stored this complete tree network on hard disk, the BuildModel_Master process will be complete, and will stop execution.
- FIG. 13 illustrates BuildModel_Slave 148, a highly simplified pseudo-code representation of the process which is run on each of the slave processors to train the tree's leaf node neural networks. A separate instance of this process is run for each leaf node which has been associated with a given slave processor.
- Each instance of BuildModel_Slave starts with step 150, which creates a hidden-layer neural network 75, indicated schematically in FIG. 2, for its associated leaf node. This network has an input for each of the I source fields, and an output for each of the J target fields, where the integer values I and J have been previously specified by a user of the system, and where at least the I fields are included in the N parameters used to train the non-terminal nodes. The neural network will also include a hidden layer which contains a number of nodes specified by the user.
- Once the leaf node's neural network has been created, a loop 151 causes a training loop 152 to be repeated until the percentage change, between training loops, in the sum of the differences between generated and actual outputs is below a given level. The expanded view of the leaf node net shown in the lower right-hand corner of FIG. 2 schematically represents this training process. In each iteration of the training loop 152, a step 154 uses each record in the leaf node's training set to train the leaf node's neural network. As indicated in FIG. 2, during training each record has each of its I source fields 83′ connected to a corresponding one of the network's inputs and each of its J target fields 83″ connected to a corresponding one of the network's outputs. The difference between the values generated at the network's J outputs and the training record's values for the corresponding J target fields is used to train the network's weights, such as by back propagation or any other method for training hidden-layer neural networks.
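- A hedged sketch of such a leaf-net training loop, using plain batch backpropagation (the architecture matches the description; the learning rate, initialization, and stopping threshold are illustrative guesses):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_leaf_net(X, Y, n_hidden, lr=0.1, max_loops=500, tol=1e-4):
    """Train a net with I inputs (columns of X), one sigmoid hidden
    layer, and J linear outputs (columns of Y) by backpropagation."""
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))  # input->hidden
    W2 = rng.normal(scale=0.1, size=(Y.shape[1], n_hidden))  # hidden->output
    prev_err = None
    for _ in range(max_loops):
        H = sigmoid(X @ W1.T)                  # hidden activations
        diff = Y - H @ W2.T                    # target minus generated
        W2 += lr * diff.T @ H / len(X)         # output-layer update
        dH = (diff @ W2) * H * (1.0 - H)       # error pushed back to hidden
        W1 += lr * dH.T @ X / len(X)           # input-layer update
        err = np.abs(diff).sum()
        if prev_err and abs(prev_err - err) / prev_err < tol:
            break                              # percentage change below level
        prev_err = err
    return W1, W2
```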
- Once loop 151 has determined that the neural network has undergone enough training loops to be properly trained, step 156 creates a compressed representation of the leaf node's neural net. This compressed representation consists of a matrix for the input layer having a row for each hidden-layer node and a column for each input-layer node. Each entry in the matrix contains the weight value of the connection between its corresponding input and hidden-layer nodes. The compressed representation also includes a corresponding matrix having a row for each output node and a column for each hidden-layer node. Where there is only one output node, this matrix reduces to a vector.
- Once a compressed representation has been made for the leaf node's trained hidden-layer neural network, that compressed representation is sent back to the master processor so that it can be put into its proper place on the complete neural tree network, as described above with regard to step 130 of FIG. 3. Once this has been done, BuildModel_Slave is complete and its execution terminates.
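- The compressed form is just the two weight matrices from the training sketch above; a minimal container (hypothetical, for illustration only):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CompressedLeafNet:
    """Step 156 sketch: one row per hidden node / column per input,
    and one row per output / column per hidden node; a single output
    node collapses the second matrix to a vector."""
    input_weights: np.ndarray    # shape (n_hidden, n_inputs)
    output_weights: np.ndarray   # shape (n_outputs, n_hidden)
```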
- Turning now to FIGS. 14-19, the ApplyModel process will be described.
- FIG. 14 is a schematic graphical representation of the overall ApplyModel process. In this process, a large apply data set 160 is split into sub-sets, or partitions, 162, if it is not already so partitioned. Each such partition is supplied to a separate slave processor 52, and each data record in that partition is passed through a copy of the compressed neural tree net 164, created by the BuildModel process, which is stored on that processor.
- The records 82′ of the apply data set will normally include all of the N parameters used as inputs to the neural nets of the non-terminal nodes. In some instances they might not yet have any values for the J target fields of the leaf node neural networks, since, in many instances, it is the purpose of the neural tree network to predict the values in those fields before actual values for those fields have been determined. Often the apply data set is huge, containing many millions of records.
- FIG. 16 illustrates ApplyModel_Master 170, a simplified pseudo-code representation of the process run on the master processor 52A to control the ApplyModel process shown schematically in FIG. 14. In this simplified illustration this process is shown as including steps 172-178.
- Step 172 tests to see if the apply data set has already been partitioned, and, if not, it partitions it. Since each slave processor will have an identical copy of the compressed neural tree network 164, it makes no difference into which processor's partition a particular record is sent. Thus, any partitioning scheme, such as a simple round-robin scheme, which distributes records between partitions in a roughly equal manner, and which executes relatively quickly, will work well for this purpose.
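- A sketch of such a quick, roughly even partitioner (the striding approach shown is one of several that would satisfy the description):

```python
def partition_round_robin(records, n_processors):
    """Step 172 sketch: deal records out by position; any fast scheme
    works, since every slave holds the same compressed tree."""
    return [records[i::n_processors] for i in range(n_processors)]
```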
- In the embodiment of the invention described, the ApplyModel process is one of a set of modular computing processes 180 which can be run on a parallel computer. If the ApplyModel process 180A is being run without any preceding modular process, as shown schematically in FIG. 18, or with an immediately preceding modular process which does not produce a separate partition for each of the processors to be used in the ApplyModel process, the partitioning process 182 which is part of the module 180A will have to partition the apply data set, as indicated in step 172.
- If, on the other hand, the ApplyModel process is being performed immediately after a process which has already partitioned the apply data set, then the partitioning process 182 will merely pass through the previously made partitions. An example of this is represented in FIG. 19, in which the ApplyModel process is shown following a preprocessing process 180B, which is used to remove duplicate records and to reduce the number of fields in each record to those necessary for the ApplyModel process.
- Returning now to FIG. 16, once step 172 has ensured the apply data set is partitioned, step 174 distributes a copy of the compressed complete neural tree network 164 to each slave processor node. Then step 176 causes each processor to run the ApplyModel_Slave process 190 on its associated data partition. Then step 178 receives all of the records selected by all of the leaf node neural networks running on all of the slave processors, and reports them to the user's workstation 56 shown in FIG. 1. Once this is done the ApplyModel_Master process is complete, and it terminates execution.
- FIG. 17 provides a highly simplified pseudo-code illustration of the ApplyModel_Slave process 190. FIG. 15 illustrates this process graphically.
- Loop 192 of ApplyModel_Slave is performed for each record 82′ in the data partition supplied to the individual processor on which ApplyModel_Slave is running. This loop causes each record to be appropriately routed down through the compressed neural tree 164. It starts with a step 194 which makes the root node 72A′ the initial CurrentNode for the current record. Then a loop 196, comprised of steps 198 and 200, is repeated until the record's CurrentNode is no longer a non-terminal node. Step 198 applies each of the current record's N parameter values to the corresponding inputs of the node's two-layer neural network. Then, depending on whether the output of the neural network, as determined by multiplying the vector formed by the input fields of the current record by the node's associated weight vector, is above or below the node's SplitPoint 139, step 200 selects one of the CurrentNode's two child nodes as the new CurrentNode. Thus, the loop 196 routes a given record from the root node all the way down to the one of the tree's leaf nodes 75′ corresponding to its associated portion of the N-dimensional space defined in the BuildModel training process.
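- A sketch of this routing loop over the compressed tree (node attributes such as weight_vector, split_point, and children are hypothetical names for the quantities the text describes):

```python
def route_to_leaf(record, node):
    """Loop 196 sketch: descend from the root, comparing each node's
    dot-product score against its SplitPoint, until a leaf is reached."""
    while node.children:                      # still a non-terminal node
        score = record @ node.weight_vector   # two-layer net output
        node = node.children[1] if score > node.split_point else node.children[0]
    return node                               # the leaf whose net analyzes it
```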
- Once the current record has reached a given leaf node, step 202 applies the record's I source fields to the inputs of the leaf node's hidden-layer neural network. Then step 204 classifies the record depending upon the output of that neural network, normally treating the record as a selected record 82′ if the leaf node network's output for it is above a threshold value 208, and discarding the record if it is not. In other embodiments of the invention the estimated values produced at the outputs of a leaf node's neural network for each record are recorded in that record's target fields, and saved as part of the record for later use. Such later use can include statistical or database analysis of the estimated fields of the apply data set.
- Once the loop 192 has routed each record to the appropriate leaf node net and caused that leaf node net to classify the record, step 206 sends the results of the classification to the master processor, and execution of ApplyModel_Slave terminates.
- The neural tree network produced by the above method has the advantage of performing better analysis for a given level of computation than prior neural networks or prior neural tree networks. By dividing the N-dimensional data space into sub-spaces and using each such sub-space to train a separate end-node hidden-layer neural network, the training samples fed to each such end net are much more similar to one another. This results in three advantages: 1) it takes fewer hidden-layer nodes to accurately model the data supplied to each network; 2) it takes fewer training cycles to train each hidden-layer network; and 3) each training cycle has fewer training records. Each of these three factors alone results in computational savings. Their combination results in a much greater one.
- FIG. 20 illustrates another embodiment of the invention which is similar to that described above with regard to FIGS. 1-19, except that the non-terminal nodes 72″ of its neural tree network 70″ contain hidden-layer neural networks 76″, instead of the two-layer networks 76 shown in FIG. 2.
- As is indicated in the expanded view of the non-terminal node 72″ shown in the upper right corner of FIG. 20, the training of such non-terminal nets in the embodiment of FIG. 20 is very similar to that used in the embodiment of FIG. 2. The training loops 108″ and 109″, which correspond to the training loops 108 and 109 of FIG. 3, train each such net in the manner of step 110 of FIG. 3, that is, by applying each of the N parameters of each training record to the net's inputs, supplying the ParameterOfGreatestSpread to the net's output, and using a training algorithm to modify the net's weights so as to reduce the difference between the generated and supplied output values. The only difference is that the application of the training algorithm has to update more weights, since there is a hidden layer.
- The selection of which records are sent to each child node of a given non-terminal node 72″ is basically the same as that described above with regard to steps 112-120 of FIG. 3. The training records to be supplied to the non-terminal node are ordered on a ScoreList 140 in terms of their corresponding outputs on the neural net once it has been trained. A SplitPoint 139 is chosen on the ScoreList such that there is a desired ratio of records above and below it. And the records above the SplitPoint are sent to one child node and those below it are sent to the other.
- The use of such hidden-layer neural networks has the effect of recursively splitting the N-dimensional space defined by the records of the training set into sub-spaces, as does the embodiment of the invention using two-layer nets. The difference is that the boundaries of the sub-spaces created with hidden-layer nets in the non-terminal tree nodes of FIG. 20 are curved in N-dimensional space, allowing for a division of records between leaf nodes which is more likely to group together, into a common leaf node, records which are similar for purposes of the analysis task. This further improves the accuracy of the neural tree network's analysis.
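- Only the scoring function changes in this variant; a sketch (weight matrices W1 and W2 as in the earlier leaf-net sketch; the names are illustrative):

```python
import numpy as np

def score_with_hidden_net(records, W1, W2):
    """FIG. 20 variant sketch: a non-terminal node scores records with
    a hidden-layer net, so the surface where the score equals the
    SplitPoint is curved in N-dimensional space."""
    hidden = 1.0 / (1.0 + np.exp(-(records @ W1.T)))  # sigmoid hidden layer
    return (hidden @ W2.T).ravel()    # single output node -> one score each
```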
- It should be understood that the foregoing description and drawings are given merely to explain and illustrate the invention, and that the invention is not limited thereto, except insofar as the interpretation of the appended claims is so limited. Those skilled in the art who have the disclosure before them will be able to make modifications and variations therein without departing from the scope of the invention.
- For example, the functions described in the claims below, or the devices for performing them, can be realized by many different programming and data structures, and by using different organization and sequencing. This is because programming is an extremely flexible art form in which a given idea of any complexity, once understood by those skilled in the art, can be manifested in a virtually unlimited number of ways.
- Furthermore, it should be understood that the invention of the present application, as broadly claimed, is not limited to use with any one type of operating system or computer hardware. For example, many of the functions shown being performed in software in the specification could be performed in hardware in other embodiments, and vice versa.
- Similarly, the neural tree network processes described above could all be run on one processor. Or, if run on multiple processors, they could be run on multiple processors of many different kinds, including SMP, or symmetric multi-processing, systems; massively parallel systems similar to that in FIG. 1 but having many more processors; or more loosely coupled networks of computers, such as networks of computer workstations.
- Similarly, many embodiments of the invention will not use the master and slave paradigm described above. Furthermore, in many embodiments of the invention the tasks described above as being performed on only one processor could be run on multiple processors. For example, the task of training non-terminal nodes and using them to partition data for the training of leaf node neural networks should be parallelized if it will significantly increase the speed with which the tree can be built and trained. This would be the case if the number of non-terminal nodes becomes very large, or if the amount of computation associated with training each of them becomes large. For example, when the non-terminal nodes have hidden layers, as in FIG. 20, parallelization will tend to be more appropriate.
- It should be understood that in embodiments of the invention running on symmetric multiprocessing, or SMP, systems there will be no need to store a separate copy of the neural network tree for each processor, since all the processors will share a common memory. Nor will there be a need for one processor to transfer the records associated with a given leaf node to the processor which is going to train that leaf node, since those records will reach that processor when it fetches them from the shared memory itself.
- It should also be understood that, in some embodiments of the invention, neural tree networks similar to those shown in FIGS. 2 and 20 can be used to partition data for multiple processors which are using the data for purposes other than training hidden-layer neural networks. For example, such neural network trees can be used to partition data for parallel processors performing other types of modeling or analysis techniques, such as multi-dimensional statistical modeling, Kohonen networks, and discrimination trees. Similarly, in some embodiments of the invention, the decision tree part of the entire neural tree network is replaced by another type of analytical classification algorithm, such as a Kohonen network, and the subsets of training data or apply data created by such a Kohonen network would be supplied to hidden-layer neural networks. When used in a parallel environment, the Kohonen network could be used to partition a training set into subsets, each representing a class of records.
- In other embodiments of the invention, a neural tree network of the type shown in FIGS. 2 and 20 could be applied in a process similar to that shown in FIG. 14, except that the partitioner 182, shown in FIG. 18, associated with the ApplyModel object would pass records through the compressed representation of the decision tree part of the neural tree network, and the individual parallel processors receiving a partition of data set records sent to them by the tree partitioner would pass those records through the compressed representation of the corresponding hidden-layer neural network. In such an embodiment, the decision tree partitioner would decide to which of the processors executing the hidden-layer neural networks a given record should be sent, based on which of the decision tree's leaf nodes the record is routed to. If the system is running more than one hidden-layer neural network on any processor node, the partitioner must label records sent to such nodes, indicating which leaf node each record has been associated with.
- One alternate embodiment of the hybrid tree network described in the above specification is described in a patent application (the “sibling patent”) entitled “Apparatus And Methods For Programming Parallel Computers”, filed on the same day as this patent application, on behalf of the intended assignee of the present application. This sibling patent, which names as inventors Michael J. Beckerle, James Richard Burns, Jerry L. Callen, Jeffrey D. Ives, Robert L. Krawitz, Daniel L. Leary, and Steven Rosenthal, is hereby incorporated herein by reference in its entirety.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/989,098 US20020083424A1 (en) | 1996-03-25 | 2001-11-20 | Systems for analyzing and computing data items |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/624,844 US5909681A (en) | 1996-03-25 | 1996-03-25 | Computer system and computerized method for partitioning data for parallel processing |
US09/281,984 US6415286B1 (en) | 1996-03-25 | 1999-03-29 | Computer system and computerized method for partitioning data for parallel processing |
US09/989,098 US20020083424A1 (en) | 1996-03-25 | 2001-11-20 | Systems for analyzing and computing data items |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/281,984 Continuation US6415286B1 (en) | 1996-03-25 | 1999-03-29 | Computer system and computerized method for partitioning data for parallel processing |
Publications (1)
Publication Number | Publication Date |
---|---|
US20020083424A1 true US20020083424A1 (en) | 2002-06-27 |
Family
ID=24503552
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US08/624,844 Expired - Fee Related US5909681A (en) | 1996-03-25 | 1996-03-25 | Computer system and computerized method for partitioning data for parallel processing |
US09/281,984 Expired - Fee Related US6415286B1 (en) | 1996-03-25 | 1999-03-29 | Computer system and computerized method for partitioning data for parallel processing |
US09/989,098 Abandoned US20020083424A1 (en) | 1996-03-25 | 2001-11-20 | Systems for analyzing and computing data items |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US08/624,844 Expired - Fee Related US5909681A (en) | 1996-03-25 | 1996-03-25 | Computer system and computerized method for partitioning data for parallel processing |
US09/281,984 Expired - Fee Related US6415286B1 (en) | 1996-03-25 | 1999-03-29 | Computer system and computerized method for partitioning data for parallel processing |
Country Status (1)
Country | Link |
---|---|
US (3) | US5909681A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2005348A2 (en) * | 2006-02-14 | 2008-12-24 | Intelliscience Corporation | Methods and systems for data analysis and feature recognition including detection of avian influenza virus |
EP2018617A2 (en) * | 2006-02-14 | 2009-01-28 | Intelliscience Corporation | Methods and system for aggregating and using physical samples and data in a virtual environment |
CN104346440A (en) * | 2014-10-10 | 2015-02-11 | 浙江大学 | Neural-network-based cross-media Hash indexing method |
WO2016141282A1 (en) * | 2015-03-04 | 2016-09-09 | The Regents Of The University Of California | Convolutional neural network with tree pooling and tree feature map selection |
Families Citing this family (145)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5909681A (en) * | 1996-03-25 | 1999-06-01 | Torrent Systems, Inc. | Computer system and computerized method for partitioning data for parallel processing |
JPH1021210A (en) * | 1996-06-28 | 1998-01-23 | Fujitsu Ltd | Problem solving device with learning function |
US20050180095A1 (en) | 1996-11-29 | 2005-08-18 | Ellis Frampton E. | Global network computers |
US6725250B1 (en) | 1996-11-29 | 2004-04-20 | Ellis, Iii Frampton E. | Global network computers |
US7024449B1 (en) * | 1996-11-29 | 2006-04-04 | Ellis Iii Frampton E | Global network computers |
US7634529B2 (en) | 1996-11-29 | 2009-12-15 | Ellis Iii Frampton E | Personal and server computers having microchips with multiple processing units and internal firewalls |
US6167428A (en) * | 1996-11-29 | 2000-12-26 | Ellis; Frampton E. | Personal computer microprocessor firewalls for internet distributed processing |
US7805756B2 (en) | 1996-11-29 | 2010-09-28 | Frampton E Ellis | Microchips with inner firewalls, faraday cages, and/or photovoltaic cells |
US7926097B2 (en) | 1996-11-29 | 2011-04-12 | Ellis Iii Frampton E | Computer or microchip protected from the internet by internal hardware |
US8225003B2 (en) | 1996-11-29 | 2012-07-17 | Ellis Iii Frampton E | Computers and microchips with a portion protected by an internal hardware firewall |
US7506020B2 (en) * | 1996-11-29 | 2009-03-17 | Frampton E Ellis | Global network computers |
US6732141B2 (en) | 1996-11-29 | 2004-05-04 | Frampton Erroll Ellis | Commercial distributed processing by personal computers over the internet |
US7035906B1 (en) | 1996-11-29 | 2006-04-25 | Ellis Iii Frampton E | Global network computers |
US8312529B2 (en) | 1996-11-29 | 2012-11-13 | Ellis Frampton E | Global network computers |
US6330008B1 (en) * | 1997-02-24 | 2001-12-11 | Torrent Systems, Inc. | Apparatuses and methods for monitoring performance of parallel computing |
US6092065A (en) * | 1998-02-13 | 2000-07-18 | International Business Machines Corporation | Method and apparatus for discovery, clustering and classification of patterns in 1-dimensional event streams |
US6347310B1 (en) * | 1998-05-11 | 2002-02-12 | Torrent Systems, Inc. | Computer system and process for training of analytical models using large data sets |
US7801782B2 (en) * | 1998-07-31 | 2010-09-21 | Jpmorgan Chase Bank, Na | Object oriented system for managing complex financial instruments |
US6542894B1 (en) * | 1998-12-09 | 2003-04-01 | Unica Technologies, Inc. | Execution of multiple models using data segmentation |
US6349299B1 (en) * | 1998-12-24 | 2002-02-19 | International Business Machines Corporation | System and method for storing electronic contact information into an electronic address book |
US7047232B1 (en) | 1999-01-13 | 2006-05-16 | Ab Initio Software Corporation | Parallelizing applications of script-driven tools |
US6801938B1 (en) | 1999-06-18 | 2004-10-05 | Torrent Systems, Inc. | Segmentation and processing of continuous data streams using transactional semantics |
JP4600847B2 (en) * | 1999-06-18 | 2010-12-22 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Segmentation and processing of continuous data streams using transaction semantics |
US6684203B1 (en) * | 1999-11-08 | 2004-01-27 | Oracle International Corporation | Using global temporary tables to transform queries |
FR2806498B1 (en) * | 2000-03-17 | 2002-06-14 | Lipha | COMPUTER COMPUTING SYSTEM AND CALCULATION METHOD IMPLEMENTED USING SUCH A SYSTEM |
JP4123712B2 (en) * | 2000-11-27 | 2008-07-23 | 株式会社日立製作所 | Communication processing method and recording medium on which communication processing program is recorded |
US7080101B1 (en) * | 2000-12-01 | 2006-07-18 | Ncr Corp. | Method and apparatus for partitioning data for storage in a database |
US20020078064A1 (en) * | 2000-12-18 | 2002-06-20 | Ncr Corporation | Data model for analysis of retail transactions using gaussian mixture models in a data mining system |
US6687693B2 (en) | 2000-12-18 | 2004-02-03 | Ncr Corporation | Architecture for distributed relational data mining systems |
US6947878B2 (en) * | 2000-12-18 | 2005-09-20 | Ncr Corporation | Analysis of retail transactions using gaussian mixture models in a data mining system |
US20020161561A1 (en) * | 2001-01-16 | 2002-10-31 | Sridevi Sarma | System and method for association of object sets |
US6954776B1 (en) * | 2001-05-07 | 2005-10-11 | Oracle International Corporation | Enabling intra-partition parallelism for partition-based operations |
US7007035B2 (en) * | 2001-06-08 | 2006-02-28 | The Regents Of The University Of California | Parallel object-oriented decision tree system |
US8214196B2 (en) | 2001-07-03 | 2012-07-03 | University Of Southern California | Syntax-based statistical translation model |
US7249357B2 (en) * | 2001-08-20 | 2007-07-24 | Silicon Graphics, Inc. | Transparent distribution and execution of data in a multiprocessor environment |
US20130289902A1 (en) * | 2012-04-30 | 2013-10-31 | Knowm Tech, Llc | Anomaly detection utilizing energy flow networks |
WO2004001623A2 (en) | 2002-03-26 | 2003-12-31 | University Of Southern California | Constructing a translation lexicon from comparable, non-parallel corpora |
US20040181526A1 (en) * | 2003-03-11 | 2004-09-16 | Lockheed Martin Corporation | Robust system for interactively learning a record similarity measurement |
US20040181527A1 (en) * | 2003-03-11 | 2004-09-16 | Lockheed Martin Corporation | Robust system for interactively learning a string similarity measurement |
US20040181501A1 (en) * | 2003-03-11 | 2004-09-16 | Lockheed Martin Corporation | Parallelizable system for concise representation of data |
US20040181512A1 (en) * | 2003-03-11 | 2004-09-16 | Lockheed Martin Corporation | System for dynamically building extended dictionaries for a data cleansing application |
KR100539788B1 (en) * | 2003-06-13 | 2006-01-10 | 엘지전자 주식회사 | Method for carrying out uncommon mime type in mobile communication terminal |
CA2891196C (en) * | 2003-06-25 | 2018-03-20 | Ab Initio Technology Llc | Computer-aided parallelizing of computation graphs |
US8548794B2 (en) | 2003-07-02 | 2013-10-01 | University Of Southern California | Statistical noun phrase translation |
US20050262189A1 (en) * | 2003-08-27 | 2005-11-24 | Ascential Software Corporation | Server-side application programming interface for a real time data integration service |
US8060553B2 (en) * | 2003-08-27 | 2011-11-15 | International Business Machines Corporation | Service oriented architecture for a transformation function in a data integration platform |
US8307109B2 (en) | 2003-08-27 | 2012-11-06 | International Business Machines Corporation | Methods and systems for real time integration services |
US20060010195A1 (en) * | 2003-08-27 | 2006-01-12 | Ascential Software Corporation | Service oriented architecture for a message broker in a data integration platform |
US20050240354A1 (en) * | 2003-08-27 | 2005-10-27 | Ascential Software Corporation | Service oriented architecture for an extract function in a data integration platform |
US20050223109A1 (en) * | 2003-08-27 | 2005-10-06 | Ascential Software Corporation | Data integration through a services oriented architecture |
US8041760B2 (en) | 2003-08-27 | 2011-10-18 | International Business Machines Corporation | Service oriented architecture for a loading function in a data integration platform |
US7814142B2 (en) * | 2003-08-27 | 2010-10-12 | International Business Machines Corporation | User interface service for a services oriented architecture in a data integration platform |
US20050234969A1 (en) * | 2003-08-27 | 2005-10-20 | Ascential Software Corporation | Services oriented architecture for handling metadata in a data integration platform |
US20050235274A1 (en) * | 2003-08-27 | 2005-10-20 | Ascential Software Corporation | Real time data integration for inventory management |
US7814470B2 (en) * | 2003-08-27 | 2010-10-12 | International Business Machines Corporation | Multiple service bindings for a real time data integration service |
US20050228808A1 (en) * | 2003-08-27 | 2005-10-13 | Ascential Software Corporation | Real time data integration services for health care information data integration |
CN101120340B (en) * | 2004-02-21 | 2010-12-08 | 数据迅捷股份有限公司 | Ultra-shared-nothing parallel database |
US20050251533A1 (en) * | 2004-03-16 | 2005-11-10 | Ascential Software Corporation | Migrating data integration processes through use of externalized metadata representations |
US7761406B2 (en) * | 2004-03-16 | 2010-07-20 | International Business Machines Corporation | Regenerating data integration functions for transfer from a data integration platform |
US8296127B2 (en) | 2004-03-23 | 2012-10-23 | University Of Southern California | Discovery of parallel text portions in comparable collections of corpora and training using comparable texts |
US8666725B2 (en) | 2004-04-16 | 2014-03-04 | University Of Southern California | Selection and use of nonstatistical translation components in a statistical machine translation framework |
US7644050B2 (en) * | 2004-12-02 | 2010-01-05 | International Business Machines Corporation | Method and apparatus for annotation-based behavior extensions |
US7577721B1 (en) * | 2004-06-08 | 2009-08-18 | Trend Micro Incorporated | Structured peer-to-peer push distribution network |
US20060036721A1 (en) * | 2004-06-15 | 2006-02-16 | Dong Zhao | Run-time tool for network management application |
US7555743B2 (en) * | 2004-06-15 | 2009-06-30 | Alcatel-Lucent Usa Inc. | SNMP agent code generation and SNMP agent framework for network management application development |
US20050278708A1 (en) * | 2004-06-15 | 2005-12-15 | Dong Zhao | Event management framework for network management application development |
US20060004856A1 (en) * | 2004-06-15 | 2006-01-05 | Xiangyang Shen | Data management and persistence frameworks for network management application development |
US20060010203A1 (en) * | 2004-06-15 | 2006-01-12 | Nokia Corporation | Personal server and network |
US20050278361A1 (en) * | 2004-06-15 | 2005-12-15 | Brunell Edward G | View definition language for network management application development |
US20060004904A1 (en) * | 2004-06-30 | 2006-01-05 | Intel Corporation | Method, system, and program for managing transmit throughput for a network controller |
JP5452868B2 (en) | 2004-10-12 | 2014-03-26 | ユニヴァーシティー オブ サザン カリフォルニア | Training for text-to-text applications that use string-to-tree conversion for training and decoding |
CA2486103A1 (en) * | 2004-10-26 | 2006-04-26 | Platespin Ltd. | System and method for autonomic optimization of physical and virtual resource use in a data center |
US20060095480A1 (en) * | 2004-10-29 | 2006-05-04 | Microsoft Corporation | Method and subsystem for performing subset computation for replication topologies |
US7933868B2 (en) * | 2004-11-04 | 2011-04-26 | Microsoft Corporation | Method and system for partition level cleanup of replication conflict metadata |
US20060106895A1 (en) * | 2004-11-12 | 2006-05-18 | Microsoft Corporation | Method and subsystem for performing metadata cleanup for replication topologies |
US7779008B2 (en) * | 2005-02-16 | 2010-08-17 | Oracle International Corporation | Parallel partition-wise aggregation |
US8676563B2 (en) | 2009-10-01 | 2014-03-18 | Language Weaver, Inc. | Providing human-generated and machine-generated trusted translations |
US8886517B2 (en) | 2005-06-17 | 2014-11-11 | Language Weaver, Inc. | Trust scoring for language translation systems |
US20070016824A1 (en) * | 2005-07-14 | 2007-01-18 | International Business Machines Corporation | Methods and apparatus for global systems management |
US7389222B1 (en) * | 2005-08-02 | 2008-06-17 | Language Weaver, Inc. | Task parallelization in a text-to-text system |
US10319252B2 (en) | 2005-11-09 | 2019-06-11 | Sdl Inc. | Language capability assessment and training apparatus and techniques |
US7765536B2 (en) * | 2005-12-21 | 2010-07-27 | Management Services Group, Inc. | System and method for the distribution of a program among cooperating processors |
US8387033B2 (en) * | 2005-12-21 | 2013-02-26 | Management Services Group, Inc. | System and method for the distribution of a program among cooperating processing elements |
US8387034B2 (en) * | 2005-12-21 | 2013-02-26 | Management Services Group, Inc. | System and method for the distribution of a program among cooperating processing elements |
US7904759B2 (en) * | 2006-01-11 | 2011-03-08 | Amazon Technologies, Inc. | System and method for service availability management |
US9037698B1 (en) | 2006-03-14 | 2015-05-19 | Amazon Technologies, Inc. | Method and system for collecting and analyzing time-series data |
US8601112B1 (en) * | 2006-03-14 | 2013-12-03 | Amazon Technologies, Inc. | Method and system for collecting and analyzing time-series data |
US7979439B1 (en) | 2006-03-14 | 2011-07-12 | Amazon Technologies, Inc. | Method and system for collecting and analyzing time-series data |
US8943080B2 (en) | 2006-04-07 | 2015-01-27 | University Of Southern California | Systems and methods for identifying parallel documents and sentence fragments in multilingual document collections |
US8886518B1 (en) | 2006-08-07 | 2014-11-11 | Language Weaver, Inc. | System and method for capitalizing machine translated text |
US7945627B1 (en) | 2006-09-28 | 2011-05-17 | Bitdefender IPR Management Ltd. | Layout-based electronic communication filtering systems and methods |
US8433556B2 (en) | 2006-11-02 | 2013-04-30 | University Of Southern California | Semi-supervised training for statistical word alignment |
US9122674B1 (en) | 2006-12-15 | 2015-09-01 | Language Weaver, Inc. | Use of annotations in statistical machine translation |
US8468149B1 (en) | 2007-01-26 | 2013-06-18 | Language Weaver, Inc. | Multi-lingual online community |
US8615389B1 (en) | 2007-03-16 | 2013-12-24 | Language Weaver, Inc. | Generation and exploitation of an approximate language model |
US8831928B2 (en) | 2007-04-04 | 2014-09-09 | Language Weaver, Inc. | Customizable machine translation service |
EP1978468A1 (en) * | 2007-04-04 | 2008-10-08 | Sap Ag | A method and a system for secure execution of workflow tasks in a distributed workflow management system within a decentralized network system |
WO2008128177A1 (en) * | 2007-04-13 | 2008-10-23 | The University Of Vermont And State Agricultural College | Relational pattern discovery across multiple databases |
US8825466B1 (en) | 2007-06-08 | 2014-09-02 | Language Weaver, Inc. | Modification of annotated bilingual segment pairs in syntax-based machine translation |
US20090080658A1 (en) * | 2007-07-13 | 2009-03-26 | Brent Waters | Method and apparatus for encrypting data for fine-grained access control |
US8119173B2 (en) * | 2007-07-16 | 2012-02-21 | Philip Morris Usa Inc. | Method of flavor encapsulation through the use of a drum coater |
EP2031816B1 (en) * | 2007-08-29 | 2012-02-22 | NTT DoCoMo, Inc. | Optimal operation of hierarchical peer-to-peer networks |
US8429199B2 (en) * | 2007-08-31 | 2013-04-23 | Oracle International Corporation | Load on demand network analysis |
US8572184B1 (en) | 2007-10-04 | 2013-10-29 | Bitdefender IPR Management Ltd. | Systems and methods for dynamically integrating heterogeneous anti-spam filters |
US8010614B1 (en) | 2007-11-01 | 2011-08-30 | Bitdefender IPR Management Ltd. | Systems and methods for generating signatures for electronic communication classification |
US8125796B2 (en) | 2007-11-21 | 2012-02-28 | Frampton E. Ellis | Devices with faraday cages and internal flexibility sipes |
US8131655B1 (en) | 2008-05-30 | 2012-03-06 | Bitdefender IPR Management Ltd. | Spam filtering using feature relevance assignment in neural networks |
US8290917B2 (en) * | 2008-06-02 | 2012-10-16 | Microsoft Corporation | Reordering of data elements in a data parallel system |
US9100246B1 (en) * | 2008-06-19 | 2015-08-04 | Symantec Corporation | Distributed application virtualization |
US9996572B2 (en) * | 2008-10-24 | 2018-06-12 | Microsoft Technology Licensing, Llc | Partition management in a partitioned, scalable, and available structured storage |
US20100115246A1 (en) * | 2008-10-31 | 2010-05-06 | Yahoo! Inc. | System and method of data partitioning for parallel processing of dynamically generated application data |
US8990064B2 (en) | 2009-07-28 | 2015-03-24 | Language Weaver, Inc. | Translating documents based on content |
US8380486B2 (en) | 2009-10-01 | 2013-02-19 | Language Weaver, Inc. | Providing machine-generated translations and corresponding trust levels |
US9665620B2 (en) | 2010-01-15 | 2017-05-30 | Ab Initio Technology Llc | Managing data queries |
US8429735B2 (en) | 2010-01-26 | 2013-04-23 | Frampton E. Ellis | Method of using one or more secure private networks to actively configure the hardware of a computer or microchip |
US10417646B2 (en) | 2010-03-09 | 2019-09-17 | Sdl Inc. | Predicting the cost associated with translating textual content |
US11003838B2 (en) | 2011-04-18 | 2021-05-11 | Sdl Inc. | Systems and methods for monitoring post translation editing |
US9116955B2 (en) | 2011-05-02 | 2015-08-25 | Ab Initio Technology Llc | Managing data queries |
US8694303B2 (en) | 2011-06-15 | 2014-04-08 | Language Weaver, Inc. | Systems and methods for tuning parameters in statistical machine translation |
US8886515B2 (en) | 2011-10-19 | 2014-11-11 | Language Weaver, Inc. | Systems and methods for enhancing machine translation post edit review processes |
US8959522B2 (en) | 2012-01-30 | 2015-02-17 | International Business Machines Corporation | Full exploitation of parallel processors for data processing |
US8942973B2 (en) | 2012-03-09 | 2015-01-27 | Language Weaver, Inc. | Content page URL translation |
US10261994B2 (en) | 2012-05-25 | 2019-04-16 | Sdl Inc. | Method and system for automatic management of reputation of translators |
US9152622B2 (en) | 2012-11-26 | 2015-10-06 | Language Weaver, Inc. | Personalized machine translation via online adaptation |
US9336249B2 (en) * | 2013-04-30 | 2016-05-10 | Wal-Mart Stores, Inc. | Decision tree with just-in-time nodal computations |
US9213694B2 (en) | 2013-10-10 | 2015-12-15 | Language Weaver, Inc. | Efficient online domain adaptation |
JP6454706B2 (en) | 2013-12-06 | 2019-01-16 | アビニシオ テクノロジー エルエルシー | Source code conversion |
US9530226B2 (en) * | 2014-02-18 | 2016-12-27 | Par Technology Corporation | Systems and methods for optimizing N dimensional volume data for transmission |
US10055691B2 (en) | 2014-09-08 | 2018-08-21 | Pivotal Software, Inc. | Stream processing with dynamic event routing |
US10437819B2 (en) | 2014-11-14 | 2019-10-08 | Ab Initio Technology Llc | Processing queries containing a union-type operation |
US10482389B2 (en) | 2014-12-04 | 2019-11-19 | Sap Se | Parallel development and deployment for machine learning models |
US10417281B2 (en) | 2015-02-18 | 2019-09-17 | Ab Initio Technology Llc | Querying a data source on a network |
EP3411835B1 (en) * | 2016-02-05 | 2023-07-05 | DeepMind Technologies Limited | Augmenting neural networks with hierarchical external memory |
US9916405B2 (en) * | 2016-02-22 | 2018-03-13 | International Business Machines Corporation | Distributed timing analysis of a partitioned integrated circuit design |
EP3451240A4 (en) * | 2016-04-27 | 2020-01-01 | Cambricon Technologies Corporation Limited | Apparatus and method for performing auto-learning operation of artificial neural network |
US20180039905A1 (en) * | 2016-08-03 | 2018-02-08 | International Business Machines Corporation | Large scale distributed training of data analytics models |
WO2019001418A1 (en) * | 2017-06-26 | 2019-01-03 | 上海寒武纪信息科技有限公司 | Data sharing system and data sharing method therefor |
US20190138890A1 (en) * | 2017-11-08 | 2019-05-09 | Ping Liang | Expandable and real-time recofigurable hardware for neural networks and logic reasoning |
EP3701351A4 (en) * | 2018-01-30 | 2021-01-27 | D5Ai Llc | Self-organizing partially ordered networks |
US11327156B2 (en) * | 2018-04-26 | 2022-05-10 | Metawave Corporation | Reinforcement learning engine for a radar system |
GB201810736D0 (en) | 2018-06-29 | 2018-08-15 | Microsoft Technology Licensing Llc | Neural trees |
US11093223B2 (en) | 2019-07-18 | 2021-08-17 | Ab Initio Technology Llc | Automatically converting a program written in a procedural programming language into a dataflow graph and related systems and methods |
CN113377998A (en) * | 2021-06-28 | 2021-09-10 | 北京百度网讯科技有限公司 | Data loading method and device, electronic equipment and storage medium |
US11714556B2 (en) * | 2021-09-14 | 2023-08-01 | quadric.io, Inc. | Systems and methods for accelerating memory transfers and computation efficiency using a computation-informed partitioning of an on-chip data buffer and implementing computation-aware data transfer operations to the on-chip data buffer |
CN116382599B (en) * | 2023-06-07 | 2023-08-29 | 之江实验室 | Distributed cluster-oriented task execution method, device, medium and equipment |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4760604A (en) * | 1985-02-15 | 1988-07-26 | Nestor, Inc. | Parallel, multi-unit, adaptive, nonlinear pattern class separator and identifier |
US4870568A (en) * | 1986-06-25 | 1989-09-26 | Thinking Machines Corporation | Method for searching a database system including parallel processors |
US4876643A (en) * | 1987-06-24 | 1989-10-24 | Kabushiki Kaisha Toshiba | Parallel searching system having a master processor for controlling plural slave processors for independently processing respective search requests |
JPS647231A (en) * | 1987-06-30 | 1989-01-11 | Toshiba Corp | Parallel processing device for object-oriented system |
US4975975A (en) * | 1988-05-26 | 1990-12-04 | Gtx Corporation | Hierarchical parametric apparatus and method for recognizing drawn characters |
US5179683A (en) * | 1988-06-14 | 1993-01-12 | Hitachi, Ltd. | Retrieval apparatus including a plurality of retrieval units |
US5095443A (en) * | 1988-10-07 | 1992-03-10 | Ricoh Company, Ltd. | Plural neural network system having a successive approximation learning method |
JPH02236668A (en) * | 1989-03-10 | 1990-09-19 | Hitachi Ltd | Input/output processing method |
JP2940933B2 (en) * | 1989-05-20 | 1999-08-25 | 株式会社リコー | Pattern recognition method |
US5537593A (en) * | 1990-02-12 | 1996-07-16 | Fmc Corporation | Method for solving enumerative search problems using message passing on parallel computers |
US5428783A (en) * | 1990-11-28 | 1995-06-27 | Motorola, Inc. | LAN-based loosely coupled large-grain parallel processing method |
US5239594A (en) * | 1991-02-12 | 1993-08-24 | Mitsubishi Denki Kabushiki Kaisha | Self-organizing pattern classification neural network system |
US5307485A (en) * | 1991-05-31 | 1994-04-26 | International Business Machines Corporation | Method and apparatus for merging sorted lists in a multiprocessor shared memory system |
JP3269849B2 (en) * | 1992-05-29 | 2002-04-02 | 株式会社日立製作所 | Parallel database processing system and its retrieval method |
GB9214514D0 (en) * | 1992-07-08 | 1992-08-19 | Massachusetts Inst Technology | Information processing |
US5495606A (en) * | 1993-11-04 | 1996-02-27 | International Business Machines Corporation | System for parallel processing of complex read-only database queries using master and slave central processor complexes |
US5615127A (en) * | 1994-11-30 | 1997-03-25 | International Business Machines Corporation | Parallel execution of a complex task partitioned into a plurality of entities |
US5819021A (en) | 1995-12-11 | 1998-10-06 | Ab Initio Software Corporation | Overpartitioning system and method for increasing checkpoints in component-based parallel applications |
US5712971A (en) | 1995-12-11 | 1998-01-27 | Ab Initio Software Corporation | Methods and systems for reconstructing the state of a computation |
GB9600549D0 (en) * | 1996-01-11 | 1996-03-13 | Lucas Ind Plc | Motor drive control |
US5909681A (en) * | 1996-03-25 | 1999-06-01 | Torrent Systems, Inc. | Computer system and computerized method for partitioning data for parallel processing |
US5966072A (en) | 1996-07-02 | 1999-10-12 | Ab Initio Software Corporation | Executing computations expressed as graphs |
US5940086A (en) * | 1997-01-10 | 1999-08-17 | Hewlett Packard Company | System and method for dynamically allocating data among geometry accelerators in a computer graphics system |
US6088716A (en) | 1997-04-28 | 2000-07-11 | Ab Initio Software Corporation | Method for preventing buffer deadlock in dataflow computations |
US5969726A (en) * | 1997-05-30 | 1999-10-19 | Hewlett-Packard Co. | Caching and coherency control of multiple geometry accelerators in a computer graphics system |
US5897638A (en) | 1997-06-16 | 1999-04-27 | Ab Initio Software Corporation | Parallel virtual file system |
- 1996
  - 1996-03-25 US US08/624,844 patent/US5909681A/en not_active Expired - Fee Related
- 1999
  - 1999-03-29 US US09/281,984 patent/US6415286B1/en not_active Expired - Fee Related
- 2001
  - 2001-11-20 US US09/989,098 patent/US20020083424A1/en not_active Abandoned
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2005348A2 (en) * | 2006-02-14 | 2008-12-24 | Intelliscience Corporation | Methods and systems for data analysis and feature recognition including detection of avian influenza virus |
EP2018617A2 (en) * | 2006-02-14 | 2009-01-28 | Intelliscience Corporation | Methods and system for aggregating and using physical samples and data in a virtual environment |
EP2005348A4 (en) * | 2006-02-14 | 2011-11-09 | Intelliscience Corp | Methods and systems for data analysis and feature recognition including detection of avian influenza virus |
EP2018617A4 (en) * | 2006-02-14 | 2011-11-16 | Intelliscience Corp | Aggregating and using physical samples |
CN104346440A (en) * | 2014-10-10 | 2015-02-11 | 浙江大学 | Neural-network-based cross-media Hash indexing method |
WO2016141282A1 (en) * | 2015-03-04 | 2016-09-09 | The Regents Of The University Of California | Convolutional neural network with tree pooling and tree feature map selection |
Also Published As
Publication number | Publication date |
---|---|
US6415286B1 (en) | 2002-07-02 |
US5909681A (en) | 1999-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6415286B1 (en) | 2002-07-02 | Computer system and computerized method for partitioning data for parallel processing |
Anghel et al. | Benchmarking and optimization of gradient boosting decision tree algorithms | |
Deng | Interpreting tree ensembles with intrees | |
Banharnsakun | A MapReduce-based artificial bee colony for large-scale data clustering | |
Groër et al. | A parallel algorithm for the vehicle routing problem | |
US9720998B2 (en) | Massive clustering of discrete distributions | |
Shafigh et al. | A linear programming embedded simulated annealing in the design of distributed layout with production planning and systems reconfiguration | |
Boutsinas et al. | On distributing the clustering process | |
Maros et al. | Machine learning for performance prediction of spark cloud applications | |
Masadeh et al. | Grey wolf algorithm for requirements prioritization | |
Soheili et al. | DQPFS: Distributed quadratic programming based feature selection for big data | |
van der Gaast et al. | A deep learning approach for the selection of an order picking system | |
Dai et al. | An improved hybrid Canopy-Fuzzy C-means clustering algorithm based on MapReduce model | |
Poggiali et al. | Quantum clustering with k-means: A hybrid approach | |
Saidi et al. | Feature selection using genetic algorithm for big data | |
Miller et al. | Parallel computation and FASTA: confronting the problem of parallel database search for a fast sequence comparison algorithm | |
Joshi et al. | Parallel algorithms in data mining | |
Tsaregorodtsev | Parallel implementation of back-propagation neural network software on SMP computers | |
Bouaguel et al. | Distributed Evolutionary Feature Selection for Big Data Processing | |
Jurczuk et al. | Accelerating GPU-based evolutionary induction of decision trees-fitness evaluation reuse | |
Sreedharan et al. | Leave-One-Out Cross-Validation in Machine Learning |
Skvortsov et al. | Approach of Acceleration of Genetic Algorithm on CUDA platform | |
Avila et al. | Efficient In-Situ Quantum Computing Simulation of Shor's and Grover's Algorithms | |
Grefenstette | Robot learning with parallel genetic algorithms on networked computers | |
Liang et al. | Research on the big data mining algorithm based on modified neural network and structure optimized genetic algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
| AS | Assignment | Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TORRENT SYSTEMS, INC.;REEL/FRAME:017555/0173; Effective date: 20051219
| AS | Assignment | Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ASCENTIAL SOFTWARE CORPORATION;REEL/FRAME:017555/0184; Effective date: 20051219