Robust Soft-Entropy Neural Network Trees

George H. John
Computer Science Department
Stanford University
Stanford, CA 94305
gjohn@cs.Stanford.EDU

We present a new method for the induction of tree-structured recursive partitioning classifiers that use a neural network as the partitioning function at each node of the tree. Our technique is appropriate for pattern recognition tasks with many continuous inputs and a single multivalued nominal output. This paper presents two main contributions: 1) a novel objective function called {\em soft entropy}, which is used to train each neural net to give the optimal partitioning of the data, and 2) a novel but simple method for removing outliers called {\em iterative re-filtering}, which boosts performance on many datasets. These two ideas are presented in the context of a single learning system called SENNT-PIRE (Soft Entropy Neural Net Trees with Pruning and Iterative RE-filtering).

Submitted to Advances in Neural Information Processing Systems 7
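The abstract names but does not define the {\em soft entropy} criterion. As an illustration only, the sketch below assumes it refers to the usual weighted child entropy of a binary split, computed from soft (sigmoidal) routing weights rather than hard counts so that the criterion is differentiable in the node's parameters. The function name, its arguments, and the single-sigmoid gate are hypothetical simplifications, not the paper's actual formulation.

```python
import numpy as np

def soft_entropy(weights, bias, X, Y):
    """Differentiable impurity of a soft binary split (illustrative).

    weights, bias : parameters of a single sigmoid gating unit standing in
                    for the neural network at a tree node.
    X : (n, d) array of continuous inputs.
    Y : (n, k) one-hot array of class labels.

    Each example is routed left with probability g and right with
    probability 1 - g; entropy is computed from the resulting soft class
    counts, so the criterion is smooth in the gate parameters.
    """
    g = 1.0 / (1.0 + np.exp(-(X @ weights + bias)))      # soft routing, shape (n,)
    route = np.stack([g, 1.0 - g], axis=1)               # (n, 2) left/right weights
    counts = route.T @ Y                                  # (2, k) soft class counts
    child_n = counts.sum(axis=1, keepdims=True)           # soft number of examples per child
    p = counts / np.maximum(child_n, 1e-12)               # class distribution per child
    h = -(p * np.log(np.maximum(p, 1e-12))).sum(axis=1)   # entropy of each child
    return float((child_n[:, 0] / len(X)) @ h)            # size-weighted average impurity
```

Because the quantity is smooth in the gate parameters, a system along these lines could minimize it by gradient descent at each node before recursing on the resulting partition; how SENNT-PIRE actually trains its per-node networks is described in the paper, not in this sketch.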