Applied Math Seminar – Dengfeng Sun, Purdue University
April 26 @ 11:00 am - 12:00 pm
Title: Improving the Convergence Rate of the Distributed Gradient Descent Method
Abstract: This talk presents our recent work on an accelerated Distributed Gradient Descent (DGD) method for distributed optimization problems. We observed that the inexact convergence of the DGD algorithm can be caused by inaccuracy in the consensus procedure of a distributed optimization setting. Motivated by this observation, we develop a sufficiently accurate consensus method to obtain a better convergence result. We show that the accuracy of the solution in distributed optimization can be controlled by a predefined error-bound sequence, and that exact convergence is achieved when this sequence decays to zero. Furthermore, linear convergence is obtained when the sequence decays at a linear rate. Owing to the flexibility in the choice of both the consensus error bounds and the consensus schemes, our method offers an opportunity to balance communication and computation costs in distributed optimization problems. This work can potentially benefit engineering practice such as formation control and swarm control of UAV networks.