G-CNNs for Classification of High-Dimensional Data – In this paper, we propose an efficient and robust deep learning algorithm for unsupervised text classification. Learning is performed with a CNN-based model that captures the semantic relations between the words of a text and uses them for classification. To learn these relations, the model must first learn the relationships among word-level features, which are not always available in either the word embeddings or the model itself. We therefore combine several word-embedding features, which we call word-level features, with a more discriminative feature that classifies words from low-level information. We validate the model on unsupervised Chinese text classification datasets and on a publicly available Chinese word graph. The model achieves accuracy comparable to state-of-the-art baselines for both unsupervised and supervised classification, particularly when coupled with fast inference.
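The abstract describes a CNN-based classifier over word embeddings but gives no architecture details. The snippet below is a minimal illustrative sketch of such a model in PyTorch; the class name TextCNN, the layer sizes, kernel widths, and the vocab_size/num_classes values are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of a CNN text classifier over word embeddings (PyTorch).
# All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_classes=10,
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per kernel width, applied over the word dimension.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)        # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                # (batch, embed_dim, seq_len)
        # Convolve, apply ReLU, then max-pool over time for each kernel width.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))   # (batch, num_classes)

# Usage sketch: classify a batch of two padded token-id sequences.
model = TextCNN(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 40)))
```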
State-of-the-art algorithms for sparse coding and regression are based on discrete and continuous distributions over the data. To address the computational issues of learning the structure of these components directly, we take a deep-learning perspective on supervised learning: we encode the data into discrete and continuous regularization functions by using a neural network to encode the feature vectors. We formulate a general framework and use it to develop a novel sparse coding and regression formulation that is particularly suitable for practical applications on high-dimensional data. We evaluate the framework on synthetic and real-world datasets and demonstrate that our method beats the state of the art in both training and test time on both of these challenging settings.
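The abstract describes encoding feature vectors with a neural network under sparsity-inducing regularization, but gives no concrete formulation. Below is a minimal hedged sketch of one common instantiation of that idea, a neural encoder trained with an L1 penalty on its code; the SparseAutoencoder class, code_dim, and l1_weight are illustrative assumptions rather than the paper's method.

```python
# Minimal sketch of neural sparse coding: a neural network encodes the
# feature vectors and an L1 regularizer keeps the code sparse.
# Network shape, l1_weight, and the training step are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, input_dim, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, input_dim)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def train_step(model, x, optimizer, l1_weight=1e-3):
    """One gradient step: reconstruction loss plus L1 penalty on the code."""
    recon, code = model(x)
    loss = nn.functional.mse_loss(recon, x) + l1_weight * code.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch on random high-dimensional data.
model = SparseAutoencoder(input_dim=1000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randn(32, 1000)
print(train_step(model, batch, opt))
```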
Evaluating the Ability of a Decision Tree to Perform its Qualitative Negotiation TD(FP) Method
A Minimax Stochastic Loss Benchmark
G-CNNs for Classification of High-Dimensional Data
Joint Learning of Cross-Modal Attribute and Semantic Representation for Action Recognition
Efficient Geodesic Regularization on Graphs and Applications to Deep Learning Neural Networks