Privacy-Preserving Paper Published in the Journal INS
Source: Wensheng Gan / Jinan University
2023-02-25

Privacy-preserving pattern mining paper published online in the international journal INS

Our group's paper on privacy-preserving frequent pattern mining under a federated framework, "Privacy-preserving federated mining of frequent itemsets", has been published online in Information Sciences (SCI, IF: 8.233, JCR Q1, CAS Zone 1, CCF B), a leading international journal in artificial intelligence and related fields: https://doi.org/10.1016/j.ins.2023.01.002. The authors are Yao Chen (graduate student, class of 2021), Prof. Wensheng Gan (corresponding author), Prof. Yongdong Wu, and Prof. Philip S. Yu of the University of Illinois at Chicago. Jinan University is the first affiliation of the paper. This research was supported by the Youth and General Programs of the National Natural Science Foundation of China, the Guangdong Basic and Applied Basic Research Foundation, and the Pazhou Lab Young Scholar Program. Information Sciences is one of the most influential international journals in the artificial intelligence area of computer science, publishing the latest research advances and techniques in artificial intelligence, data science, machine learning, privacy and security, and related fields.

 

Paper title: Privacy-preserving federated mining of frequent itemsets

Article link: https://www.sciencedirect.com/science/article/pii/S0020025523000026

Authors: Yao Chen (graduate student), Wensheng Gan*, Yongdong Wu, and Philip S. Yu

Abstract: With growing concerns about data privacy and increasingly stringent data security regulations, it is not feasible to directly mine or share a dataset that contains private data, which makes collecting and analyzing data from multiple parties difficult. Federated learning can analyze multiple datasets without transmitting the original data. However, existing federated frameworks for mining frequent patterns rely on the Apriori property, which suffers from low efficiency and requires multiple scans of the dataset. To improve mining efficiency, this paper proposes a federated learning framework named FedFIM. FedFIM collects noisy responses from participants, which the server uses to reconstruct a noisy dataset; a non-Apriori algorithm is then applied to this noisy dataset to mine frequent patterns. In addition, FedFIM incorporates a differential-privacy mechanism into federated learning, which meets the need for federated modeling while protecting data privacy. Experiments show that FedFIM achieves shorter running times and better applicability than state-of-the-art baselines.
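The pipeline sketched in the abstract — clients perturb their responses locally, and the server inverts the noise to recover approximate counts before running a non-Apriori miner — rests on a randomized-response style estimator. Below is a minimal, hypothetical Python sketch of that estimation step for a single item; the function names, the keep-probability, and the single-item setting are our illustrative assumptions, not details taken from the paper.

```python
import random

def randomized_response(bit, p_keep):
    """Report the true bit with probability p_keep; otherwise flip it."""
    return bit if random.random() < p_keep else 1 - bit

def estimate_count(noisy_sum, n, p_keep):
    """Unbiased estimate of the true count of 1-bits among n noisy reports.

    E[noisy_sum] = t * p_keep + (n - t) * (1 - p_keep); solve for t.
    """
    return (noisy_sum - n * (1 - p_keep)) / (2 * p_keep - 1)

# Simulate 10,000 clients, 3,000 of whom truly hold the item.
random.seed(7)
n, true_count, p = 10_000, 3_000, 0.75  # p = 0.75 corresponds to eps = ln(3)
noisy_sum = sum(randomized_response(int(i < true_count), p) for i in range(n))
est = estimate_count(noisy_sum, n, p)
print(f"true = {true_count}, estimate = {est:.0f}")
```

The server never sees any individual's true bit, only the perturbed report, yet the debiased aggregate count converges to the true frequency as the number of participants grows, which is what makes subsequent frequent-pattern mining on the reconstructed data feasible.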
