
ICML 2000 Paper List

Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), Stanford University, Stanford, CA, USA, June 29 - July 2, 2000.

Crafting Papers on Machine Learning.
Induction of Concept Hierarchies from Noisy Data.
Improving Short-Text Classification using Unlabeled Data for Classification Problems.
Linear Discriminant Trees.
Combining Multiple Learning Strategies for Effective Cross Validation.
The Effect of the Input Density Distribution on Kernel-based Classifiers.
Multi-Agent Reinforcement Learning for Traffic Light Control.
Classification with Multiple Latent Variable Models using Maximum Entropy Discrimination.
Lightweight Rule Induction.
Enhancing the Plausibility of Law Equation Discovery.
Solving the Multiple-Instance Problem: A Lazy Learning Approach.
Using Natural Language Processing and Discourse Features to Identify Understanding Errors.
Clustering with Instance-level Constraints.
Discovering Homogeneous Regions in Spatial Data through Competition.
A Quantification of Distance Bias Between Evaluation Metrics In Classification.
Locally Weighted Projection Regression: Incremental Real Time Learning in High Dimensional Space.
An Evolutionary Approach to Evidence-Based Learning of Deterministic Finite Automata.
Bootstrapping Syntax and Recursion using Alignment-Based Learning.
Unpacking Multi-valued Symbolic Features and Classes in Memory-Based Language Learning.
Model Selection Criteria for Learning Belief Nets: An Empirical Comparison.
Hierarchical Unsupervised Learning.
Learning Priorities From Noisy Examples.
Local Expert Autoassociators for Anomaly Detection.
Mutual Information in Learning Feature Transformations.
Partial Linear Trees.
Support Vector Machine Active Learning with Applications to Text Classification.
Discovering the Structure of Partial Differential Equations from Example Behaviour.
A Comparative Study of Cost-Sensitive Boosting Algorithms.
Probabilistic DFA Inference using Kullback-Leibler Divergence and Minimality.
Selection of Support Vector Kernel Parameters for Improved Generalization.
Efficient Learning Through Evolution: Neural Programming and Internal Reinforcement.
Feature Selection and Incremental Learning of Probabilistic Concept Hierarchies.
A Bayesian Framework for Reinforcement Learning.
TPOT-RL Applied to Network Routing.
Multi-agent Q-learning and Regression Trees for Automated Pricing Decisions.
Using Learning by Discovery to Segment Remotely Sensed Images.
Sparse Greedy Matrix Approximation for Machine Learning.
Practical Reinforcement Learning in Continuous Spaces.
Discovering Test Set Regularities in Relational Domains.
Learning to Predict Performance from Formula Modeling and Training Data.
Obtaining Simplified Rule Bases by Hybrid Learning.
Using Knowledge to Speed Learning: A Comparison of Knowledge-based Cascade-correlation and Multi-task Learning.
Incremental Learning in SwiftFile.
Instance Pruning as an Information Preserving Problem.
An Adaptive Regularization Criterion for Supervised Learning.
Less is More: Active Learning with Support Vector Machines.
Predicting the Generalization Performance of Cross Validatory Model Selection Criteria.
Achieving Efficient and Cognitively Plausible Learning in Backgammon.
Direct Bayes Point Machines.
Learning to Fly: An Application of Hierarchical Reinforcement Learning.
Image Color Constancy Using EM and Cached Statistics.
Knowledge Propagation in Model-based Reinforcement Learning Tasks.
Adaptive Resolution Model-Free Reinforcement Learning: Decision Boundary Partitioning.
Combining Reinforcement Learning with a Local Control Algorithm.
Shaping in Reinforcement Learning by Changing the Physics of the Problem.
Eligibility Traces for Off-Policy Policy Evaluation.
Constructive Feature Learning and the Development of Visual Expertise.
Meta-Learning by Landmarking Various Learning Algorithms.
A Normative Examination of Ensemble Learning Algorithms.
X-means: Extending K-means with Efficient Estimation of the Number of Clusters.
Clustering the Users of Large Web Sites into Communities.
Learning Distributed Representations by Mapping Concepts and Relations into a Linear Space.
FeatureBoost: A Meta-Learning Algorithm that Improves Model Robustness.
Generalized Average-Case Analyses of the Nearest Neighbor Algorithm.
Comparing the Minimum Description Length Principle and Boosting in the Automatic Analysis of Discourse.
An Approach to Data Reduction and Clustering with Theoretical Guarantees.
Learning Probabilistic Models for Decision-Theoretic Navigation of Mobile Robots.
Algorithms for Inverse Reinforcement Learning.
A Boosting Approach to Topic Spotting on Subdialogues.
Rates of Convergence for Variable Resolution Schemes in Optimal Control.
Complete Cross-Validation for Nearest Neighbor Classifiers.
Learning Chomsky-like Grammars for Biological Sequence Families.
Acquisition of Stand-up Behavior by a Real Robot using Hierarchical Reinforcement Learning.
Machine Learning for Subproblem Selection.
"Boosting" a Positive-Data-Only Learner.
Mixtures of Factor Analyzers.
Maximum Entropy Markov Models for Information Extraction and Segmentation.
Bootstrap Methods for the Cost-Sensitive Evaluation of Classifiers.
Efficient Mining from Large Databases by Query Learning.
An Initial Study of an Adaptive Hierarchical Vision System.
Selective Voting for Perception-like Online Learning.
The Space of Jumping Emerging Patterns and Its Incremental Maintenance Algorithms.
A Bayesian Approach to Temporal Data Clustering using Hidden Markov Models.
An Algorithm for Distributed Reinforcement Learning in Cooperative Multi-Agent Systems.
Version Space Algebra and its Application to Programming by Demonstration.
Data Reduction Techniques for Instance-Based Learning from Human/Computer Interface Data.
Algorithm Selection using Reinforcement Learning.
Voting Nearest-Neighbor Subclassifiers.
A Dynamic Adaptation of AD-trees for Efficient Machine Learning on Large Data Sets.
Detecting Concept Drift with Support Vector Machines.
Learning Bayesian Networks for Diverse and Varying Numbers of Evidence Sets.
Learning Horn Expressions with LogAn-H.
Pseudo-convergent Q-Learning by Competitive Pricebots.
MultiStage Cascading of Multiple Classifiers: One Man's Noise is Another Man's Data.
A Universal Generalization for Temporal-Difference Learning Using Haar Basis Functions.
State-based Classification of Finger Gestures from Electromyographic Signals.
Estimating the Generalization Performance of an SVM Efficiently.
Approximate Dimension Equalization in Vector-based Information Retrieval.
Learning Declarative Control Rules for Constraint-Based Planning.
Experimental Results on Q-Learning for General-Sum Stochastic Games.
Why Discretization Works for Naive Bayesian Classifiers.
Data as Ensembles of Records: Representation and Comparison.
An Integrated Connectionist Approach to Reinforcement Learning for Robotic Control.
Meta-Learning for Phonemic Annotation of Corpora.
Empirical Bayes for Learning to Learn.
Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning.
Learning Curved Multinomial Subfamilies for Natural Language Processing and Information Retrieval.
Localizing Policy Gradient Estimates to Action Transition.
Learning Filaments.
Enhancing Supervised Learning with Unlabeled Data.
Learning Multiple Models for Reward Maximization.
Analyzing Relational Learning in the Phase Transition Framework.
Using Error-Correcting Codes for Text Classification.
Relative Loss Bounds for Temporal-Difference Learning.
Learning Subjective Functions with Large Margins.
Online Ensemble Learning: An Empirical Study.
Bounds on the Generalization Performance of Kernel Machine Ensembles.
Ideal Theory Refinement under Object Identity.
Anomaly Detection over Noisy Data using Learned Probability Distributions.
Feature Subset Selection and Order Identification for Unsupervised Learning.
Exploiting the Cost (In)sensitivity of Decision Tree Splitting Criteria.
A Unified Bias-Variance Decomposition and its Applications.
Bayesian Averaging of Classifiers and the Overfitting Problem.
Hidden Strengths and Limitations: An Empirical Investigation of Reinforcement Learning.
Fixed Points of Approximate Value Iteration and Temporal-Difference Learning.
Using Multiple Levels of Learning and Diverse Evidence to Uncover Coordinately Controlled Genes.
On-line Learning for Humanoid Robot Systems.
Automatic Identification of Mathematical Concepts.
Discriminative Reranking for Natural Language Parsing.
Learning to Probabilistically Identify Authoritative Documents.
Automatically Extracting Features for Concept Learning from the Web.
Learning in Non-stationary Conditions: A Control Theoretic Approach.
A Divide and Conquer Approach to Learning from Prior Knowledge.
Learning to Select Text Databases with Neural Nets.
Learning to Create Customized Authority Lists.
Dimension Reduction Techniques for Training Polynomial Networks.
Query Learning with Large Margin Classifiers.
Challenges of the Email Domain for Text Classification.
Finding Variational Structure in Data by Cross-Entropy Optimization.
Convergence Problems of General-Sum Multiagent Reinforcement Learning.
Classification of Individuals with Complex Structure.
Disciple-COA: From Agent Programming to Agent Teaching.
A Column Generation Algorithm For Boosting.
Duality and Geometry in SVM Classifiers.
Characterizing Model Errors and Differences.
Reinforcement Learning in POMDP's via Direct Gradient Ascent.
Combining Multiple Perspectives.
Behavioral Cloning of Student Pilots with Modular Neural Networks.
A Nonparametric Approach to Noisy and Costly Optimization.
Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers.
Knowledge Representation Issues in Control Knowledge Learning.