  1. What is regularization in plain english? - Cross Validated

    In general that comes with the method you use: if you use SVMs you're doing L2 regularization; if you're using LASSO you're doing L1 regularization (see what hairybeast is saying). However, if …
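
As a rough illustration of the L1/L2 distinction in this answer, here is a minimal sketch using scikit-learn (the Lasso/Ridge estimators and the toy data are my own choices, not from the thread): the L1 penalty tends to zero coefficients out exactly, while the L2 penalty only shrinks them.

```python
# Sketch: L1 (Lasso) vs L2 (Ridge) penalties on the same toy regression problem.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Only the first two features actually matter.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty
ridge = Ridge(alpha=10.0).fit(X, y)  # L2 penalty

print("Lasso coefficients:", np.round(lasso.coef_, 3))  # many exact zeros
print("Ridge coefficients:", np.round(ridge.coef_, 3))  # shrunk, none exactly zero
```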

  2. What are Regularities and Regularization? - Cross Validated

    On regularization for neural nets: When adjusting the weights while running the back-propagation algorithm, the regularization term is added to the cost function in the same manner as the …
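
A minimal sketch of that idea for a single weight vector, assuming a squared-error data loss and an L2 penalty (the variable names are mine): the penalty is added to the cost, so its derivative, lam * W, is simply added to the backpropagated gradient.

```python
# Sketch: L2 regularization term added to the cost and to its gradient
# for one linear layer (squared-error data loss assumed).
import numpy as np

def cost_and_grad(W, X, y, lam):
    pred = X @ W                               # forward pass (linear layer)
    resid = pred - y
    data_loss = 0.5 * np.mean(resid ** 2)
    reg_loss = 0.5 * lam * np.sum(W ** 2)      # L2 term added to the cost
    grad = X.T @ resid / len(y) + lam * W      # the same term appears in the gradient
    return data_loss + reg_loss, grad

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

W = np.zeros(3)
for _ in range(200):                           # plain gradient descent
    loss, grad = cost_and_grad(W, X, y, lam=0.1)
    W -= 0.1 * grad
print(np.round(W, 3))
```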

  3. The origin of the term "regularization" - Cross Validated

    Dec 10, 2016 · Tikhonov, Andrey. "Solution of incorrectly formulated problems and the regularization method." Soviet Math. Dokl., Vol. 5, 1963. Tikhonov is known for Tikhonov …

  4. When to use regularization methods for regression?

    The lasso can be computed with an algorithm based on coordinate descent as described in the recent paper by Friedman and coll., Regularization Paths for Generalized Linear Models via …
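
The coordinate descent idea from that paper can be sketched in a few lines (this is my own toy implementation of the soft-thresholding update, not the glmnet code): each coefficient is updated in turn by soft-thresholding its univariate least-squares solution on the partial residual.

```python
# Sketch: lasso by cyclic coordinate descent with soft-thresholding,
# minimizing 0.5 * ||y - X w||^2 + lam * ||w||_1.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)               # precomputed x_j' x_j
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ w + X[:, j] * w[j]    # partial residual excluding feature j
            w[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.3, size=100)
print(np.round(lasso_cd(X, y, lam=10.0), 3))    # sparse solution
```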

  5. neural networks - L2 Regularization Constant - Cross Validated

    Dec 4, 2017 · The number of parameters may affect the regularization cost, but it won't "crush" all the parameters to zero. That is because the derivative of total cost w.r.t. each …
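
A small numerical check of that point, using the ridge closed form on made-up data: increasing the L2 constant shrinks the coefficients smoothly but does not drive any of them to exactly zero, because the penalty's contribution to the derivative, lam * w_j, itself vanishes as w_j approaches zero.

```python
# Sketch: ridge (L2) coefficients shrink as lambda grows but stay nonzero,
# using the closed form w = (X'X + lam*I)^{-1} X'y.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.25]) + rng.normal(scale=0.2, size=200)

for lam in [0.0, 1.0, 10.0, 100.0, 1000.0]:
    w = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
    print(f"lambda={lam:7.1f}  w={np.round(w, 3)}")
# The entries get smaller as lambda grows, but none is pushed exactly to zero.
```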

  6. L1 & L2 double role in Regularization and Cost functions?

    Mar 19, 2023 · Regularization is a way of sacrificing the training loss value in order to improve some other facet of performance, a major example being to sacrifice the in-sample fit of a …
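
A quick way to see that trade-off (a sketch on made-up data, using scikit-learn's Ridge): the penalized model has a worse in-sample error than ordinary least squares, but a better out-of-sample error.

```python
# Sketch: regularization trades a worse training fit for a better test fit.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_train, p = 40, 30                                # few observations, many features
X_train = rng.normal(size=(n_train, p))
X_test = rng.normal(size=(500, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.0, 1.0]    # only three features matter
y_train = X_train @ beta + rng.normal(size=n_train)
y_test = X_test @ beta + rng.normal(size=500)

for name, model in [("OLS", LinearRegression()), ("Ridge", Ridge(alpha=10.0))]:
    model.fit(X_train, y_train)
    print(name,
          "train MSE:", round(mean_squared_error(y_train, model.predict(X_train)), 2),
          "test MSE:",  round(mean_squared_error(y_test, model.predict(X_test)), 2))
```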

  7. What is the meaning of regularization path in LASSO ... - Cross …

    This sequence is the regularization path. * There's also the intercept term $\beta_0$ so all this technically takes place in $(p+1)$-dimensional space, but never mind that. Anyway most …
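
Concretely, the path can be computed with scikit-learn's lasso_path (a sketch on toy data; the function and its return values are standard scikit-learn, the data are mine): for a decreasing grid of penalty values it returns one coefficient vector per penalty, and that sequence of vectors is the regularization path.

```python
# Sketch: a lasso regularization path, i.e. the sequence of coefficient
# vectors beta(lambda) as the penalty decreases.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=100)

alphas, coefs, _ = lasso_path(X, y, n_alphas=5)   # coefs has shape (n_features, n_alphas)
for a, c in zip(alphas, coefs.T):
    print(f"alpha={a:6.3f}  beta={np.round(c, 2)}")
# At the largest alpha every coefficient is zero; as alpha shrinks,
# features enter the model one by one.
```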

  8. How does regularization reduce overfitting? - Cross Validated

    Mar 13, 2015 · A common way to reduce overfitting in a machine learning algorithm is to use a regularization term that penalizes large weights (L2) or non-sparse weights (L1) etc. How can …

  9. machine learning - Why use regularisation in polynomial …

    Aug 1, 2016 · Compare, for example, a second-order polynomial without regularization to a fourth-order polynomial with it. The latter can posit big coefficients for the third and fourth powers so …
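
A sketch of that comparison (the pipeline and the noisy linear data are my own choices): with a ridge penalty, a fourth-order fit keeps its higher-order coefficients small unless the data demand them, whereas the unpenalized fit lets them float to whatever reduces training error.

```python
# Sketch: degree-4 polynomial fits of a noisy *linear* function,
# with and without an L2 (ridge) penalty on the coefficients.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=15).reshape(-1, 1)
y = 1.5 * x.ravel() + rng.normal(scale=0.3, size=15)   # the truth is linear

plain = make_pipeline(PolynomialFeatures(4), LinearRegression()).fit(x, y)
ridged = make_pipeline(PolynomialFeatures(4), Ridge(alpha=1.0)).fit(x, y)

print("no penalty :", np.round(plain[-1].coef_, 2))    # higher powers free to chase noise
print("with ridge :", np.round(ridged[-1].coef_, 2))   # higher powers pulled toward zero
```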

  10. machine learning - How does regularization work for a Gaussian …

    For the GP classification models, the parameter controlling the latent variable variation in the covariance function ($\sigma_f$) controls the regularization (again see page 144). This …
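
As a rough analogue in scikit-learn (a sketch, not the book's code; the ConstantKernel factor plays the role of the signal variance $\sigma_f^2$): a small signal variance constrains how far the latent function can move, which behaves like stronger regularization, while a large one permits a much more confident fit.

```python
# Sketch: the signal-variance factor of a GP classifier's covariance function
# acts as a regularization knob (smaller variance = stiffer latent function).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=60) > 0).astype(int)

for sigma_f in [0.1, 1.0, 10.0]:
    kernel = (ConstantKernel(sigma_f ** 2, constant_value_bounds="fixed")
              * RBF(1.0, length_scale_bounds="fixed"))
    clf = GaussianProcessClassifier(kernel=kernel, optimizer=None).fit(X, y)
    proba = clf.predict_proba(X)[:, 1]
    print(f"sigma_f={sigma_f:5.1f}  predicted probabilities span "
          f"[{proba.min():.2f}, {proba.max():.2f}]")
# With a small sigma_f the latent function is constrained, so probabilities stay
# near 0.5; a large sigma_f allows a far more confident (less regularized) fit.
```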
