Adaptive Gradient Compression and Differential Privacy for Resource-Constrained Edge Federated Learning
DOI: https://doi.org/10.65492/01/401/2026/32

Keywords: Federated Learning, Differential Privacy, Gradient Compression, Edge Computing, Resource Heterogeneity, Privacy-Utility Trade-off

Abstract
Federated learning (FL) is an attractive scheme for training machine learning models across distributed edge devices without requiring raw data to be shared. However, deploying FL at the network edge raises two interrelated issues: (1) the significant communication overhead of transmitting massive gradient vectors from resource-constrained devices, and (2) the risk of privacy leakage through attacks that invert gradients or infer training samples. Current methods treat gradient compression and differential privacy (DP) as separate concerns, leading to compounded accuracy loss. We propose AGC-DP (Adaptive Gradient Compression with Differential Privacy), a unified framework that achieves efficient communication and strong privacy guarantees across diverse edge FL environments. AGC-DP introduces a resource-aware client selection algorithm that weighs each client's battery level, available memory, and uplink bandwidth; an adaptive per-client compression ratio derived from an optimization problem; and a privacy amplification construction that exploits subsampling to tighten DP accounting. We provide rigorous theoretical analysis, including convergence guarantees under non-i.i.d. data distributions and formal (ϵ, δ)-DP proofs. Comprehensive experiments on the CIFAR-10, HAR, and FEMNIST datasets with up to 100 edge clients show that AGC-DP reduces communication overhead by up to 91.1% compared to vanilla FedAvg while maintaining accuracy within 1.5% of the vanilla baseline. Our results show that privacy, communication efficiency, and model utility can be achieved simultaneously in resource-constrained edge FL.
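To make the mechanisms named in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation: it pairs a hypothetical resource-aware selection score over battery, memory, and bandwidth with top-k sparsification as one plausible instantiation of gradient compression, followed by Gaussian clip-and-noise sanitization in the style of DP-SGD. All function names, weightings, and the noise multiplier are illustrative assumptions; the paper's adaptive scheme would set the compression ratio per client via its optimization problem.

import numpy as np

def selection_score(battery_frac, free_mem_frac, bandwidth_frac,
                    weights=(0.4, 0.3, 0.3)):
    # Hypothetical scoring rule: weighted sum of normalized resource
    # levels (each in [0, 1]); higher-scoring clients are preferred.
    w_b, w_m, w_u = weights
    return w_b * battery_frac + w_m * free_mem_frac + w_u * bandwidth_frac

def compress_topk(grad, ratio):
    # Keep only the largest-magnitude `ratio` fraction of coordinates;
    # an adaptive scheme would choose `ratio` per client per round.
    k = max(1, int(ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

def dp_sanitize(grad, clip_norm=1.0, noise_mult=1.1, rng=None):
    # DP-SGD-style sanitization: clip the L2 norm to clip_norm, then add
    # Gaussian noise scaled to the clipping bound (noise_mult assumed).
    # Randomly subsampling which clients report each round is what
    # enables the privacy-amplification accounting the abstract mentions.
    rng = np.random.default_rng() if rng is None else rng
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
    return grad * scale + noise

# Illustrative round: pick the top-scoring client, then compress and
# sanitize its gradient before upload.
clients = {"a": (0.9, 0.5, 0.8), "b": (0.2, 0.9, 0.4)}
chosen = max(clients, key=lambda c: selection_score(*clients[c]))
update = dp_sanitize(compress_topk(np.random.randn(1000), ratio=0.1))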

