Hyperparameter optimization

In machine learning, hyperparameter optimization[1] or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process.

Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model, one that minimizes a predefined loss function on given independent data.[2] The objective function takes a tuple of hyperparameters and returns the associated loss.[2] Cross-validation is often used to estimate this generalization performance, and therefore to choose the set of hyperparameter values that maximizes it.[3]
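
The following is a minimal sketch of this idea as an exhaustive (grid) search, assuming scikit-learn is available; the model (SVC), the hyperparameters searched (C, gamma), the candidate values, and the dataset are illustrative assumptions rather than anything prescribed by the text. Each hyperparameter tuple is mapped by an objective function to a cross-validated loss, and the tuple with the smallest loss is selected.

```python
# A minimal sketch of hyperparameter optimization via grid search.
# Assumptions: scikit-learn is installed; SVC, its hyperparameters (C, gamma),
# and the iris dataset are illustrative choices only.
from itertools import product

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate values for each hyperparameter; every combination forms one tuple.
grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}

def objective(params):
    """Map a hyperparameter setting to an estimated loss.

    Cross-validated accuracy estimates generalization performance;
    its complement (1 - accuracy) serves as the loss to be minimized.
    """
    model = SVC(**params)
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    return 1.0 - accuracy

# Evaluate every combination and keep the one with the smallest loss.
best_params = min(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=objective,
)
print("Best hyperparameters:", best_params)
```

Grid search is only one strategy; the same objective function could instead be passed to a random search or a Bayesian optimizer, which differ only in how they propose the next hyperparameter tuple to evaluate.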

  1. ^ Feurer, Matthias; Hutter, Frank. "Hyperparameter Optimization". In: AutoML: Methods, Systems, Challenges. pp. 3–38.
  2. ^ a b Claesen, Marc; Bart De Moor (2015). "Hyperparameter Search in Machine Learning". arXiv:1502.02127 [cs.LG].
  3. ^ Bergstra, James; Bengio, Yoshua (2012). "Random Search for Hyper-Parameter Optimization" (PDF). Journal of Machine Learning Research. 13: 281–305.