Communication-Efficient Federated Learning via Clipped Uniform Quantization

Zavareh Bozorgasl, Hao Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper presents a novel approach to enhance communication efficiency in federated learning through clipped uniform quantization. By leveraging optimal clipping thresholds and client-specific adaptive quantization schemes, the proposed method significantly reduces the bandwidth and memory required to transmit model weights between clients and the server while maintaining competitive accuracy. We investigate the effects of symmetric clipping and uniform quantization on model performance, emphasizing the role of stochastic quantization in mitigating artifacts and improving robustness. Extensive simulations demonstrate that the method achieves near-full-precision performance with substantial communication and memory savings. Moreover, the proposed approach facilitates efficient weight averaging based on the inverse of the mean squared quantization errors, effectively balancing the trade-off between communication efficiency and model accuracy. In contrast to federated averaging, this design also obviates the need to disclose client-specific data volumes to the server, thereby enhancing client privacy. Comparative analysis with conventional quantization methods further confirms the efficacy of the proposed scheme.
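The core operations the abstract describes — symmetric clipping at a threshold, uniform quantization with optional stochastic rounding, and server-side averaging weighted by the inverse mean squared quantization error — can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the bit-width, clipping threshold, and function names are assumptions chosen for clarity.

```python
import numpy as np

def clipped_uniform_quantize(w, clip, bits=4, stochastic=True, rng=None):
    """Symmetrically clip w to [-clip, clip], then uniformly quantize.

    With stochastic=True, values are rounded up with probability equal to
    their fractional part, which makes the quantizer unbiased in expectation.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    levels = 2 ** (bits - 1) - 1          # symmetric signed range, e.g. +/-7 for 4 bits
    step = clip / levels                  # uniform step size
    scaled = np.clip(w, -clip, clip) / step
    if stochastic:
        floor = np.floor(scaled)
        q = floor + (rng.random(scaled.shape) < (scaled - floor))
    else:
        q = np.round(scaled)
    return q * step                       # dequantized weights

def inverse_mse_average(originals, dequantized):
    """Aggregate client models, weighting each by 1 / (its quantization MSE).

    Unlike federated averaging, the weights depend only on quantization error,
    not on client data volumes (a hypothetical reading of the abstract's rule).
    """
    mses = np.array([np.mean((w - wq) ** 2)
                     for w, wq in zip(originals, dequantized)])
    weights = (1.0 / mses) / np.sum(1.0 / mses)   # normalize to sum to 1
    return sum(a * wq for a, wq in zip(weights, dequantized))

# Toy usage: three clients with random "model weights".
rng = np.random.default_rng(42)
clients = [rng.normal(0.0, 1.0, 1000) for _ in range(3)]
deq = [clipped_uniform_quantize(w, clip=2.5, bits=4, rng=rng) for w in clients]
avg = inverse_mse_average(clients, deq)
```

Note that stochastic rounding trades a slightly larger per-element error for an unbiased estimate, which is what lets the averaged model approach full-precision behavior as the number of clients grows.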

Original language: English
Title of host publication: 2025 59th Annual Conference on Information Sciences and Systems, CISS 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798331513269
DOIs
State: Published - 2025
Event: 59th Annual Conference on Information Sciences and Systems, CISS 2025 - Baltimore, United States
Duration: 19 Mar 2025 - 21 Mar 2025

Publication series

Name: 2025 59th Annual Conference on Information Sciences and Systems, CISS 2025

Conference

Conference: 59th Annual Conference on Information Sciences and Systems, CISS 2025
Country/Territory: United States
City: Baltimore
Period: 19/03/25 - 21/03/25

Keywords

  • Deterministic Quantization
  • Distributed Learning
  • Federated Learning
  • Optimally Clipped Tensors And Vectors (OCTAV)
  • Quantization Aware Training (QAT)
  • Stochastic Quantization
