Abstract
Most optimization methods for logistic regression or maximum entropy solve the primal problem; they include iterative scaling, coordinate descent, quasi-Newton, and truncated Newton methods. Less effort has been made to solve the dual problem. In contrast,
for linear support vector machines (SVM), dual methods have been shown to be very effective. In this paper, we apply coordinate descent methods
to solve the dual form of logistic regression and maximum entropy. Interestingly, many details differ from the situation
in linear SVM. We carefully study the theoretical convergence as well as numerical issues. The proposed method is shown to be faster than
most state-of-the-art methods for training logistic regression and maximum entropy.
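The abstract's core idea, coordinate descent on the dual of L2-regularized logistic regression, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the fixed inner Newton loop, and the simple clipping that keeps each variable strictly inside (0, C) are all choices made here for brevity. The sketch maintains w = Σᵢ αᵢ yᵢ xᵢ so that each one-variable sub-problem costs only O(#features).

```python
import math

def dual_cd_logreg(X, y, C=1.0, n_epochs=50):
    """Coordinate descent on the dual of L2-regularized logistic regression.

    Dual:  min_a  0.5 * ||sum_i a_i y_i x_i||^2
                  + sum_i [a_i log a_i + (C - a_i) log(C - a_i)],
           subject to 0 < a_i < C.
    """
    n, d = len(X), len(X[0])
    eps = 1e-12
    a = [0.5 * C] * n                       # interior starting point
    # maintain w = sum_i a_i y_i x_i throughout
    w = [sum(a[i] * y[i] * X[i][j] for i in range(n)) for j in range(d)]
    Qii = [sum(v * v for v in X[i]) for i in range(n)]  # y_i^2 = 1
    for _ in range(n_epochs):
        for i in range(n):
            # solve the one-variable sub-problem in a_i by Newton's method
            for _ in range(20):
                wx = sum(w[j] * X[i][j] for j in range(d))
                g = y[i] * wx + math.log(a[i] / (C - a[i]))   # gradient
                h = Qii[i] + C / (a[i] * (C - a[i]))          # Hessian
                new_ai = a[i] - g / h
                # keep the iterate strictly inside (0, C)
                new_ai = min(max(new_ai, eps), C - eps)
                delta = new_ai - a[i]
                if abs(delta) < 1e-10:
                    break
                a[i] = new_ai
                for j in range(d):            # incremental update of w
                    w[j] += delta * y[i] * X[i][j]
    return w
```

The incremental update of w after each αᵢ change is what makes the dual approach attractive for large sparse data: no kernel matrix is stored, and each step touches only one example's features.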
- Content Type Journal Article
- DOI 10.1007/s10994-010-5221-8
- Authors
- Hsiang-Fu Yu, Department of Computer Science, National Taiwan University, Taipei, 106 Taiwan
- Fang-Lan Huang, Department of Computer Science, National Taiwan University, Taipei, 106 Taiwan
- Chih-Jen Lin, Department of Computer Science, National Taiwan University, Taipei, 106 Taiwan
- Journal Machine Learning
- Online ISSN 1573-0565
- Print ISSN 0885-6125