Deep Bilevel Learning

Jenni, Simon; Favaro, Paolo (September 2018). Deep Bilevel Learning. In: European Conference on Computer Vision (ECCV) 2018, Munich, Germany, Sep. 8–14, 2018.

Simon_Jenni_Deep_Bilevel_Learning_ECCV_2018_paper.pdf - Published Version (PDF, 2MB)
Restricted to registered users only.
Available under license: Publisher holds Copyright.

We present a novel regularization approach for training neural networks that achieves better generalization and a lower test error than standard stochastic gradient descent. Our approach is based on the principles of cross-validation, where a validation set is used to limit model overfitting. We formulate these principles as a bilevel optimization problem, which lets us define the optimization of a cost on the validation set subject to another optimization on the training set. Overfitting is controlled by introducing weights on each mini-batch in the training set and by choosing their values so that they minimize the error on the validation set. In practice, these weights act as mini-batch learning rates in the gradient descent update equation, favoring gradients with better generalization capabilities. Because of its simplicity, this approach can be integrated with other regularization methods and training schemes. We evaluate our proposed algorithm extensively on several neural network architectures and datasets, and find that it consistently improves the generalization of the model, especially when labels are noisy.
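As a rough illustration of the mini-batch weighting described in the abstract, the sketch below re-weights per-mini-batch gradients by their alignment with a validation gradient before applying a single SGD step. This is a minimal first-order heuristic written in PyTorch, assuming a standard model/loss setup; the helper name bilevel_weighted_step and the clipped-inner-product weighting rule are illustrative assumptions, not the paper's exact bilevel solver.

```python
import torch

def bilevel_weighted_step(model, loss_fn, train_batches, val_batch, lr=0.1):
    # One SGD step in which each training mini-batch gradient is re-weighted
    # by its (clipped) alignment with the validation gradient. This is a
    # first-order stand-in for solving the bilevel problem exactly.
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the validation loss at the current parameters.
    xv, yv = val_batch
    g_val = torch.autograd.grad(loss_fn(model(xv), yv), params)

    # Per-mini-batch training gradients.
    batch_grads = [torch.autograd.grad(loss_fn(model(xb), yb), params)
                   for xb, yb in train_batches]

    # Weight each mini-batch by max(0, <g_batch, g_val>), then normalize,
    # so batches whose gradients also reduce the validation loss dominate.
    raw = [torch.clamp(sum((gb * gv).sum() for gb, gv in zip(g, g_val)), min=0.0)
           for g in batch_grads]
    total = sum(raw) + 1e-12
    weights = [w / total for w in raw]

    # Weighted update: the weights act as per-mini-batch learning rates.
    with torch.no_grad():
        for p, *gs in zip(params, *batch_grads):
            p -= lr * sum(w * g for w, g in zip(weights, gs))
```

A call might look like bilevel_weighted_step(model, torch.nn.functional.cross_entropy, train_batches, val_batch), where train_batches is a small list of (inputs, targets) tensor pairs and val_batch is one held-out pair. Mini-batches whose gradients point away from the validation gradient receive weight zero and do not contribute to the update, which mirrors how noisy-label batches get suppressed.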

Item Type: Conference or Workshop Item (Paper)

Division/Institute: 08 Faculty of Science > Institute of Computer Science (INF)

UniBE Contributor: Jenni, Simon; Favaro, Paolo

Subjects: 000 Computer science, knowledge & systems; 500 Science > 510 Mathematics

Language: English

Submitter: Xiaochen Wang

Date Deposited: 28 May 2019 15:32

Last Modified: 05 Dec 2022 15:26

BORIS DOI: 10.7892/boris.126514

URI: https://boris.unibe.ch/id/eprint/126514
