Sharpness-aware minimization

Sharpness-Aware Minimization (SAM) is an optimization algorithm used in machine learning that aims to improve model generalization. The method seeks model parameters that lie in regions of the loss landscape with uniformly low loss, rather than parameters that achieve a minimal loss value only at a single point. Formally, instead of minimizing the training loss L(w) directly, SAM minimizes the worst-case loss within a small neighborhood of the parameters, max‖ε‖≤ρ L(w + ε), where ρ is a hyperparameter controlling the neighborhood radius. This approach is described as finding "flat" minima instead of "sharp" ones. The rationale is that models trained this way are less sensitive to variations between training and test data, which can lead to better performance on unseen data.[1]
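In practice, the inner maximization is approximated to first order: the perturbation ε is taken in the direction of the current gradient, scaled to norm ρ, and the parameter update then uses the gradient evaluated at the perturbed point. The following NumPy sketch illustrates this two-step update on a toy least-squares problem; the problem setup and the hyperparameter values (`lr`, `rho`) are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

# Toy least-squares problem: L(w) = mean((X @ w - y)^2).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = rng.normal(size=5)
y = X @ true_w

def loss(w):
    r = X @ w - y
    return np.mean(r ** 2)

def grad(w):
    return 2.0 * X.T @ (X @ w - y) / len(y)

def sam_step(w, lr=0.05, rho=0.05):
    """One SAM update: step to the (first-order) worst-case point
    within an L2 ball of radius rho, then descend using the
    gradient computed there."""
    g = grad(w)
    # First-order solution of the inner maximization: ascend along g.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # "Sharpness-aware" gradient, evaluated at the perturbed point.
    g_sharp = grad(w + eps)
    return w - lr * g_sharp

w = np.zeros(5)
for _ in range(500):
    w = sam_step(w)
```

Note that each SAM step requires two gradient evaluations, one at `w` and one at `w + eps`, which roughly doubles the cost per iteration compared with plain gradient descent.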

The algorithm was introduced in a 2020 preprint by Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur, and subsequently published at ICLR 2021.[1]

  1. Foret, Pierre; Kleiner, Ariel; Mobahi, Hossein; Neyshabur, Behnam (2021). "Sharpness-Aware Minimization for Efficiently Improving Generalization". International Conference on Learning Representations (ICLR) 2021. arXiv:2010.01412.
