
Distributed Machine Learning and Gradient Optimization

Author: Jiawei Jiang


Made to order, delivery within 2-4 weeks

148.49 €

Regular price: 164.99 €

About the book

This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems, so implementing machine learning algorithms in a distributed environment has become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic improvements and careful system implementations, the book introduces three essential techniques for designing a gradient optimization algorithm that trains a distributed machine learning model: the parallel strategy, data compression, and the synchronization protocol.
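To make the three techniques named above concrete, here is a minimal sketch, not taken from the book, of data-parallel SGD in which each simulated worker computes a local gradient, compresses it with top-k sparsification, and a synchronous step averages the compressed gradients. The worker count, learning rate, k, and the synthetic linear-regression task are all illustrative assumptions for this demo.

```python
# Illustrative sketch (not from the book): data-parallel SGD with
# top-k gradient compression and synchronous averaging.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across workers (parallel strategy:
# data parallelism, each worker holds its own shard).
NUM_WORKERS, N_PER_WORKER, DIM = 4, 256, 10
true_w = rng.normal(size=DIM)
shards = []
for _ in range(NUM_WORKERS):
    X = rng.normal(size=(N_PER_WORKER, DIM))
    y = X @ true_w + 0.01 * rng.normal(size=N_PER_WORKER)
    shards.append((X, y))

def local_gradient(w, X, y):
    # Gradient of the mean squared error on one worker's shard.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def top_k(grad, k):
    # Data compression: keep only the k largest-magnitude entries.
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    sparse[idx] = grad[idx]
    return sparse

w = np.zeros(DIM)
lr, k = 0.05, 3
for step in range(200):
    # Synchronization protocol: a synchronous step waits for every worker,
    # then averages the compressed gradients (an all-reduce in real systems).
    grads = [top_k(local_gradient(w, X, y), k) for X, y in shards]
    w -= lr * np.mean(grads, axis=0)

print("distance to true weights:", np.linalg.norm(w - true_w))
```

In a real distributed deployment the averaging step would be an all-reduce or parameter-server update over the network rather than an in-process mean, and asynchronous or stale-synchronous protocols would relax the wait-for-everyone barrier.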


Written in a tutorial style, it covers a range of topics, from fundamental concepts to carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.

  • Publisher: Springer Nature Singapore
  • Year of publication: 2022
  • Format: Hardback
  • Dimensions: 241 x 160 mm
  • Language: English
  • ISBN: 9789811634192
