Abstract:
Cloud-edge-client hierarchical federated learning expands the scope of cloud data access and improves model training, but the large network scale and number of devices increase the communication burden. To address this problem, a layer-wise pruning algorithm with a fixed layer-preserving rate (Layer-wise Pruning with Fixed Layer-preserving Rate for cloud-edge-client Hierarchical Federated Learning, LP-FLR-HFL) is proposed, which prunes layers before model parameters are uploaded, effectively compressing the model size and reducing system overhead. Building on this, and considering the training differences among clients, a layer-wise pruning algorithm with an adaptive layer-preserving rate (Layer-wise Pruning with Adaptive Layer-preserving Rate for cloud-edge-client Hierarchical Federated Learning, LP-ALR-HFL) is further proposed, which adjusts the pruning rate in real time according to model accuracy, effectively mitigating the impact of non-independent and identically distributed data on pruning performance and better adapting to model changes. Simulation results show that, while keeping model accuracy controllable, the LP-FLR-HFL algorithm reduces system latency by up to 56.06% and energy consumption by up to 48.88% compared with the baseline algorithm, and the LP-ALR-HFL algorithm improves model accuracy by up to 4.71% while retaining the latency and energy optimization advantages of LP-FLR-HFL.
Keywords: hierarchical federated learning; model pruning; layer-wise pruning; communication efficiency
| DOI:10.20079/j.issn.1001-893x.241106003 |
|
Foundation Item: Supported by the National Natural Science Foundation of China (62371085) and the Fundamental Research Funds for the Central Universities (3132023514)
|
| Communication-efficient Layer-wise Pruning Algorithm for Hierarchical Federated Learning |
| LIU Haotian,WEI Ze,HE Rongxi |
| (College of Information Science and Technology,Dalian Maritime University,Dalian 116026,China) |
| Abstract: |
Cloud-edge-client hierarchical federated learning expands the scope of cloud data access and enhances model training effectiveness, but the large network size and number of devices increase the communication burden. To solve this problem, a layer-wise pruning algorithm with a fixed layer-preserving rate (LP-FLR-HFL) is proposed to perform layer pruning before model parameters are uploaded, effectively compressing the model size and lowering system overhead. Building on this, and taking into account the differences in model training between clients, a layer-wise pruning algorithm with an adaptive layer-preserving rate (LP-ALR-HFL) is proposed. This algorithm adjusts the layer-preserving rate in real time based on model accuracy, effectively mitigating the impact of non-independent and identically distributed data on pruning performance and improving model adaptation. Simulation results show that the LP-FLR-HFL algorithm reduces system latency by up to 56.06% and energy consumption by up to 48.88% compared with the baseline method while maintaining controllable model accuracy, and that the LP-ALR-HFL algorithm improves model accuracy by up to 4.71% while retaining the latency and energy optimization advantages of LP-FLR-HFL.
Key words: hierarchical federated learning; model pruning; layer-wise pruning; communication efficiency
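The two mechanisms named in the abstract can be illustrated with a minimal sketch. The abstract does not specify which criterion ranks layers or how the adaptive rule updates the rate, so the importance score (mean absolute weight), the accuracy-comparison rule, and all function names and parameters below are hypothetical placeholders, not the paper's actual method:

```python
import numpy as np


def layer_importance(layers):
    # Hypothetical importance proxy: mean absolute weight per layer.
    # The paper's actual ranking criterion is not given in the abstract.
    return [np.abs(w).mean() for w in layers]


def prune_layers(layers, keep_rate):
    """Fixed-rate layer pruning (LP-FLR-HFL idea, sketched): keep only the
    top `keep_rate` fraction of layers by importance before upload; pruned
    layers become None, i.e. they are skipped in communication."""
    n_keep = max(1, int(round(keep_rate * len(layers))))
    scores = layer_importance(layers)
    keep_idx = set(np.argsort(scores)[-n_keep:])
    return [w if i in keep_idx else None for i, w in enumerate(layers)]


def adapt_keep_rate(keep_rate, acc, prev_acc, step=0.05, lo=0.3, hi=1.0):
    """Adaptive-rate adjustment (LP-ALR-HFL idea, sketched): if accuracy
    dropped since the previous round, prune less (raise the layer-preserving
    rate); if it improved, prune more aggressively. The actual update rule
    in the paper may differ."""
    if acc < prev_acc:
        return min(hi, keep_rate + step)
    return max(lo, keep_rate - step)
```

Per-client, the adaptive variant lets slow-converging clients (e.g. those with highly non-IID data) upload more layers, while well-trained clients compress harder, which is consistent with the abstract's claim of mitigating non-IID effects.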