Cite this article:
  • LIU Haotian, WEI Ze, HE Rongxi. Communication-efficient Layer-wise Pruning Algorithm for Hierarchical Federated Learning[J]. 电讯技术, 2026, 66(2): - .



DOI: 10.20079/j.issn.1001-893x.241106003
Funding: National Natural Science Foundation of China (62371085); Fundamental Research Funds for the Central Universities (3132023514)
Communication-efficient Layer-wise Pruning Algorithm for Hierarchical Federated Learning
LIU Haotian,WEI Ze,HE Rongxi
(College of Information Science and Technology, Dalian Maritime University, Dalian 116026, China)
Abstract:
Cloud-edge-client hierarchical federated learning expands the scope of cloud data access and enhances model training effectiveness, but the large network size and number of devices increase the communication burden. To solve this problem, a layer-wise pruning algorithm with a fixed layer-preserving rate (LP-FLR-HFL) is proposed, which performs layer pruning before model parameters are uploaded, effectively compressing the model size and lowering system overhead. Building on this, and taking into account the differences in model training across clients, a layer-wise pruning algorithm with an adaptive layer-preserving rate (LP-ALR-HFL) is proposed. This algorithm adjusts the layer-preserving rate in real time based on model accuracy, effectively mitigating the impact of non-independent and identically distributed data on pruning performance and adapting better to model changes. Simulation results show that, while keeping model accuracy controllable, the LP-FLR-HFL algorithm reduces system latency by up to 56.06% and energy consumption by up to 48.88% compared with the baseline method, and the LP-ALR-HFL algorithm improves model accuracy by up to 4.71% while retaining the latency and energy optimization advantages of LP-FLR-HFL.
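The two mechanisms described above — keeping only a fixed fraction of layers before upload, and steering that fraction by observed accuracy — can be illustrated with a minimal sketch. Note the assumptions: the paper does not specify its layer-selection criterion here, so scoring layers by the L2 norm of their update, and the simple step-based rate adjustment, are illustrative stand-ins rather than the authors' actual method.

```python
import numpy as np

def prune_layers(update, keep_rate):
    """Keep only the top `keep_rate` fraction of layers (by update
    magnitude) for upload; the rest are dropped.  Scoring layers by
    the L2 norm of their update is an illustrative choice, not
    necessarily the criterion used by LP-FLR-HFL."""
    names = list(update)
    scores = {n: np.linalg.norm(update[n]) for n in names}
    k = max(1, int(round(keep_rate * len(names))))
    kept = sorted(names, key=lambda n: scores[n], reverse=True)[:k]
    return {n: update[n] for n in kept}

def adapt_keep_rate(keep_rate, acc_prev, acc_curr,
                    step=0.05, lo=0.1, hi=1.0):
    """Raise the layer-preserving rate when accuracy drops (prune
    less) and lower it when accuracy improves (prune more) -- a
    simple stand-in for LP-ALR-HFL's accuracy-driven adjustment."""
    if acc_curr < acc_prev:
        return min(hi, keep_rate + step)
    return max(lo, keep_rate - step)
```

With a 10-layer update and `keep_rate=0.3`, only the 3 layers with the largest updates are transmitted, which is where the latency and energy savings come from; the adaptive variant then nudges `keep_rate` per round based on accuracy feedback.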
Key words: hierarchical federated learning; model pruning; layer-wise pruning; communication efficiency