On accelerated gradient approximation for least square regression with L1-regularization

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper, we consider an online least-squares regression problem whose objective function is composed of a quadratic loss function and an L1 regularization on the model parameters. For each training sample, we propose to approximate the L1 regularization by a convex function, which yields an overall convex approximation to the original objective. We apply an efficient accelerated stochastic approximation algorithm to solve this approximation. The developed algorithm does not need to store previous samples, which reduces the space complexity. We further prove that it is guaranteed to converge to the global optimum at a rate of O(ln n / √n), where n is the number of training samples. The proof is based on a weaker assumption than those used in related work.
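The abstract's approach can be illustrated with a minimal sketch. Here the L1 term is smoothed by the pseudo-Huber function (one common convex approximation; the paper's exact smoothing is not specified in this abstract), and the smoothed objective is minimized with Nesterov-accelerated stochastic gradient steps that visit one sample at a time, so no past samples need to be stored. All function names, the smoothing choice, and the step-size/momentum settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def smooth_l1(w, mu):
    # Pseudo-Huber smoothing of ||w||_1 (illustrative choice of
    # convex approximation; approaches |w_j| as mu -> 0).
    return np.sum(np.sqrt(w ** 2 + mu ** 2) - mu)

def smooth_l1_grad(w, mu):
    # Gradient of the smoothed L1 term (everywhere differentiable).
    return w / np.sqrt(w ** 2 + mu ** 2)

def accelerated_sgd(X, y, lam=0.01, mu=1e-3, n_epochs=100, lr=0.01, momentum=0.9):
    """Nesterov-accelerated stochastic gradient on the smoothed objective
    (1/2)(x_i^T w - y_i)^2 + lam * smooth_l1(w), one sample per update,
    so no previous samples are stored."""
    n, d = X.shape
    w = np.zeros(d)
    v = np.zeros(d)  # momentum buffer
    rng = np.random.default_rng(0)
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            w_look = w + momentum * v  # Nesterov look-ahead point
            resid = X[i] @ w_look - y[i]
            g = resid * X[i] + lam * smooth_l1_grad(w_look, mu)
            v = momentum * v - lr * g
            w = w + v
    return w
```

A quick usage: on noiseless data generated from a sparse weight vector, the iterate recovers the generating weights up to the small bias introduced by the regularizer and the smoothing parameter mu.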

Original language: English
Title of host publication: Proceedings - 2015 IEEE Symposium Series on Computational Intelligence, SSCI 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1569-1575
Number of pages: 7
ISBN (Electronic): 9781479975600
DOIs
State: Published - 2015
Event: IEEE Symposium Series on Computational Intelligence, SSCI 2015 - Cape Town, South Africa
Duration: 8 Dec 2015 – 10 Dec 2015

Publication series

Name: Proceedings - 2015 IEEE Symposium Series on Computational Intelligence, SSCI 2015

Conference

Conference: IEEE Symposium Series on Computational Intelligence, SSCI 2015
Country/Territory: South Africa
City: Cape Town
Period: 8/12/15 – 10/12/15
