Regularization refers to a set of techniques used to avoid overfitting. It ensures that the computed function is no more curved than necessary, which is achieved by adding a penalty term to the error function. Regularization is also used for solving ill-conditioned parameter-estimation problems. Typical examples of regularization methods include Tikhonov regularization, the lasso, and the L2-norm penalty in SVMs. These techniques are also used for model selection, where they work by implicitly or explicitly penalizing models based on the number of their parameters.
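
As a minimal sketch of the penalty idea, the code below fits a high-degree polynomial with and without a Tikhonov (ridge) penalty, minimizing ||Xw - y||^2 + lam * ||w||^2. The function name ridge_fit, the polynomial degree, and the value of lam are illustrative choices, not taken from the original text; only NumPy is assumed.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Tikhonov (ridge) regression: minimize ||Xw - y||^2 + lam * ||w||^2.

    Closed-form solution: w = (X^T X + lam * I)^(-1) X^T y.
    lam = 0 recovers ordinary least squares; larger lam shrinks the
    weights toward zero, keeping the fitted function smooth.
    """
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Illustrative example: fit a degree-9 polynomial to noisy samples of
# sin(2*pi*x). Without the penalty the least-squares problem is
# ill-conditioned and the coefficients blow up (overfitting); with a
# small penalty the coefficients stay modest.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=15)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(15)
X = np.vander(x, N=10)  # design matrix of polynomial features

w_plain = ridge_fit(X, y, lam=0.0)    # unregularized least squares
w_ridge = ridge_fit(X, y, lam=1e-3)   # Tikhonov-regularized fit
print("max |w|, lam=0:   ", np.max(np.abs(w_plain)))
print("max |w|, lam=1e-3:", np.max(np.abs(w_ridge)))
```

Replacing the squared penalty lam * ||w||^2 with lam * ||w||_1 gives the lasso, which additionally drives some weights exactly to zero and so performs a form of model selection.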
