Large-scale learning with AdaGrad on Spark

Stochastic Gradient Descent (SGD) is a simple yet very efficient online learning algorithm for optimizing convex (and often non-convex) functions, and one of the most popular stochastic optimization methods in machine learning today. One drawback of SGD is that it is sensitive to the learning rate hyperparameter.

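AdaGrad addresses this sensitivity by adapting the step size per coordinate: each weight is scaled by the inverse square root of the running sum of its squared gradients. The paper's Spark implementation is not reproduced here; the following is a minimal plain-Scala sketch of that per-coordinate update on a hypothetical least-squares objective (the toy data, learning rate, and function names are illustrative assumptions, not from the paper).

    object AdaGradSketch {
      // Hypothetical toy objective: gradient of 0.5 * (w . x - y)^2 w.r.t. w
      def gradient(w: Array[Double], x: Array[Double], y: Double): Array[Double] = {
        val err = w.zip(x).map { case (wi, xi) => wi * xi }.sum - y
        x.map(_ * err)
      }

      def main(args: Array[String]): Unit = {
        val eta = 0.1   // base learning rate (assumed value, for illustration)
        val eps = 1e-8  // small constant to avoid division by zero
        // Toy examples satisfying w = (1, 2)
        val data = Seq((Array(1.0, 2.0), 5.0), (Array(2.0, 1.0), 4.0))
        val w = Array.fill(2)(0.0)           // model weights
        val gSqSum = Array.fill(2)(0.0)      // running sum of squared gradients per coordinate

        for (_ <- 1 to 100; (x, y) <- data) {
          val g = gradient(w, x, y)
          for (i <- w.indices) {
            gSqSum(i) += g(i) * g(i)
            // AdaGrad step: eta / sqrt(sum of squared gradients) per coordinate
            w(i) -= eta / (math.sqrt(gSqSum(i)) + eps) * g(i)
          }
        }
        println(w.mkString("w = [", ", ", "]"))
      }
    }

Because frequently-updated coordinates accumulate large squared-gradient sums, their effective step size shrinks, while rarely-updated coordinates keep larger steps; this is what makes a single global learning rate less critical than in plain SGD.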

Bibliographic Details
Published in: 2015 IEEE International Conference on Big Data (Big Data), pp. 2828-2830
Main Authors: Hadgu, Asmelash Teka; Nigam, Aastha; Diaz-Aviles, Ernesto
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2015