Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

4-2011

Abstract

Multiple Kernel Learning (MKL) aims to learn kernel machines for solving a real machine learning problem (e.g., classification) by exploring combinations of multiple kernels. The traditional MKL approach is in general "shallow" in the sense that the target kernel is simply a linear (or convex) combination of some base kernels. In this paper, we investigate a framework of Multi-Layer Multiple Kernel Learning (MLMKL) that aims to learn "deep" kernel machines by exploring combinations of multiple kernels in a multi-layer structure, which goes beyond the conventional MKL approach. Through a multi-layer mapping, the proposed MLMKL framework offers greater flexibility than regular MKL for finding the optimal kernel for a given application. As a first attempt at this new MKL framework, we present a Two-Layer Multiple Kernel Learning (2LMKL) method together with two efficient algorithms for classification tasks. We analyze their generalization performance and conduct an extensive set of experiments on 16 benchmark datasets, where encouraging results show that our methods outperform conventional MKL methods.
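
The contrast the abstract draws is between a "shallow" kernel (a weighted sum of base kernels) and a "deep" kernel built by composing such a combination with an outer mapping. The following minimal sketch illustrates that idea only; the exponential outer layer, the base kernels, and the weights are illustrative assumptions, not the paper's learned model or training algorithm.

import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gram matrix of an RBF base kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def poly_kernel(X, Y, degree):
    # Polynomial base kernel: k(x, y) = (x . y + 1)^degree
    return (X @ Y.T + 1.0) ** degree

def shallow_mkl_kernel(X, Y, weights, base_kernels):
    # Conventional ("shallow") MKL: a convex combination of base kernels.
    return sum(w * k(X, Y) for w, k in zip(weights, base_kernels))

def two_layer_kernel(X, Y, weights, base_kernels):
    # Two-layer combination (illustrative): an exponential outer mapping
    # applied elementwise to the weighted sum of base kernels, yielding a
    # richer kernel than a purely linear combination.
    return np.exp(shallow_mkl_kernel(X, Y, weights, base_kernels))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))
    bases = [lambda A, B: rbf_kernel(A, B, 0.5),
             lambda A, B: poly_kernel(A, B, 2)]
    w = np.array([0.7, 0.3])  # illustrative non-negative kernel weights
    K_shallow = shallow_mkl_kernel(X, X, w, bases)
    K_deep = two_layer_kernel(X, X, w, bases)
    print(K_shallow.shape, K_deep.shape)

In the paper's setting, the weights of such a combination would be learned from data rather than fixed by hand as above.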

Keywords

Benchmark datasets, Classification tasks, Generalization performance, Kernel machine, Machine learning problem, Multilayer structures

Discipline

Computer Sciences | Databases and Information Systems | Theory and Algorithms

Research Areas

Data Science and Engineering

Publication

JMLR Workshop & Conference Proceedings: 14th International Conference on Artificial Intelligence and Statistics, AISTATS 2011, April 11-13, Fort Lauderdale, FL

Volume

15

First Page

909

Last Page

917

ISSN

1532-4435

Publisher

JMLR

City or Country

Cambridge, MA

Copyright Owner and License

Authors

Additional URL

http://proceedings.mlr.press/v15/zhuang11a/zhuang11a.pdf
