TY - GEN
T1 - Communication-Efficient Federated Learning via Clipped Uniform Quantization
AU - Bozorgasl, Zavareh
AU - Chen, Hao
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - This paper presents a novel approach to enhancing communication efficiency in federated learning through clipped uniform quantization. By leveraging optimal clipping thresholds and client-specific adaptive quantization schemes, the proposed method significantly reduces bandwidth and memory requirements for model weight transmission between clients and the server while maintaining competitive accuracy. We investigate the effects of symmetric clipping and uniform quantization on model performance, emphasizing the role of stochastic quantization in mitigating quantization artifacts and improving robustness. Extensive simulations demonstrate that the method achieves near-full-precision performance with substantial communication and memory savings. Moreover, the proposed approach facilitates efficient weight averaging based on the inverse of the mean squared quantization errors, effectively balancing the trade-off between communication efficiency and model accuracy. Furthermore, in contrast to federated averaging, this design obviates the need to disclose client-specific data volumes to the server, thereby enhancing client privacy. Comparative analysis with conventional quantization methods further confirms the efficacy of the proposed scheme.
KW - Deterministic Quantization
KW - Distributed Learning
KW - Federated Learning
KW - Optimally Clipped Tensors and Vectors (OCTAV)
KW - Quantization-Aware Training (QAT)
KW - Stochastic Quantization
UR - https://www.scopus.com/pages/publications/105002711018
U2 - 10.1109/CISS64860.2025.10944759
DO - 10.1109/CISS64860.2025.10944759
M3 - Conference contribution
AN - SCOPUS:105002711018
T3 - 2025 59th Annual Conference on Information Sciences and Systems, CISS 2025
BT - 2025 59th Annual Conference on Information Sciences and Systems, CISS 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 59th Annual Conference on Information Sciences and Systems, CISS 2025
Y2 - 19 March 2025 through 21 March 2025
ER -