A Stochastic Non-Monotonic Active Learning Algorithm Based on Active Learning – The problem of active learning is of great interest in computer vision, in particular for learning algorithms with non-monotonic active learning (NMAL) for object detection and tracking. We present an approach to the active learning problem that casts the learning algorithm as a non-monotonic constraint satisfaction problem. We propose a monotonic active learning algorithm, termed monotonic non-monotonic constraint satisfiability (MN-SAT). MN-SAT requires that its constituent constraint satisfaction problems be solvable in linear time. This allows the learning algorithm to scale to a large number of feasible non-monotonic constraints even when the number of constraints to satisfy is high. By supplying a monotonic solver, we demonstrate the flexibility of practical implementations of MN-SAT on a real-world supervised classification problem. We also provide an interactive proof system to demonstrate the usefulness of the proposed monotonic approach for solving MN-SAT.
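The abstract gives no algorithmic detail, so the following is only a hypothetical sketch of what a linear-time query-selection step could look like in pool-based active learning, with the non-monotonic feasibility constraints abstracted into a boolean mask. All names here (`select_queries`, `feasible_mask`) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def uncertainty_margin(probs):
    """Margin between the top two class probabilities per pool point;
    a smaller margin means a more uncertain (more informative) point."""
    sorted_p = np.sort(probs, axis=1)
    return sorted_p[:, -1] - sorted_p[:, -2]

def select_queries(probs, feasible_mask, k):
    """Pick the k most uncertain pool points that satisfy all feasibility
    constraints, in a single linear scan over the pool -- mirroring the
    linear-time requirement the abstract places on solving.  `feasible_mask`
    is an assumed boolean encoding of the non-monotonic constraints."""
    margins = uncertainty_margin(probs)
    margins[~feasible_mask] = np.inf   # infeasible points are never queried
    return np.argsort(margins)[:k]

# Toy usage: 4 pool points, point 2 ruled out by the constraints.
probs = np.array([[0.5, 0.5], [0.9, 0.1], [0.6, 0.4], [0.55, 0.45]])
mask = np.array([True, True, False, True])
queries = select_queries(probs, mask, k=2)   # most uncertain feasible points
```

The labeled queries would then be fed back to retrain the classifier, and the loop repeated.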
In this paper we propose a new, simple yet powerful technique for robustly extracting features from latent vector data. Inspired by the use of vector representations to characterize the information in latent vectors, and by the use of latent normality information in feature extraction, we formulate the problem as a linear classifier that selects features from the vector space as well as from the vector space of features. We consider sparse vector-space learning and prove the theoretical consistency of our method. We present a theoretical analysis in which the maximum likelihood estimate is based on the posterior distribution of the sparse features. We observe that most sparse features are non-negative, and hence robust, and that we can recover the sparse features accurately without missing a significant fraction of the feature space. We show that the resulting classifiers achieve state-of-the-art prediction accuracy and classification speed. Finally, we present simulation results showing that the new classifiers outperform existing sparse-feature models in both classification speed and accuracy.
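The abstract does not specify its estimator, so as a stand-in illustration of how sparse features might be recovered with a linear model, here is a minimal sketch using iterative soft-thresholding (ISTA) for L1-regularized least squares. The function name, the regularization weight `lam`, and the toy setup are all assumptions for this sketch, not the paper's method.

```python
import numpy as np

def ista_sparse(X, y, lam=0.1, iters=2000):
    """Recover a sparse weight vector by iterative soft-thresholding (ISTA),
    minimizing 0.5*||Xw - y||^2 + lam*||w||_1.  Irrelevant features are
    driven exactly to zero by the soft-threshold step."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2   # step size below 1/L (L = Lipschitz const.)
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)           # gradient of the squared-error term
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
    return w

# Toy usage: 10 features, only features 2 and 7 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[2], w_true[7] = 3.0, -2.0
w_hat = ista_sparse(X, X @ w_true)         # recovers the sparse support
```

On noiseless data like this, the recovered weights match the true sparse support up to a small L1-induced shrinkage.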
A Comparative Analysis of Support Vector Machines
Learning to Predict Saccadic Charts from Data Captions
Learning from non-deterministic examples
Fast and Robust Metric Selection via Robust Regularization Under Matrix Kernels