Two Recent Team Papers Accepted by the Top International Journals IEEE TIP and IJIS
Source: Zhenhua Huang / South China Normal University
2021-12-31
Two recent papers from our team have been accepted by top international journals:
 
1. "Feature Map Distillation of Thin Nets for Low-resolution Object Recognition" has been accepted by IEEE Transactions on Image Processing (CCF A, CAS Tier 1, IF: 10.856).
Abstract—Intelligent video surveillance is an important computer vision application in natural environments. Since detected objects under surveillance are usually low-resolution and noisy, their accurate recognition represents a huge challenge. Knowledge distillation is an effective method to address this challenge, but existing related work usually focuses on reducing the channel count of a student network, not feature map size. As a result, they cannot transfer "privilege information" hidden in feature maps of a wide and deep teacher network into a thin and shallow student one, leading to the latter's poor performance. To address this issue, we propose a Feature Map Distillation (FMD) framework under which the feature map size of teacher and student networks is different. FMD consists of two main components: Feature Decoder Distillation (FDD) and Feature Map Consistency-enforcement (FMC). FDD reconstructs the shallow texture features of a thin student network to approximate the corresponding samples in a teacher network, which allows the high-resolution ones to directly guide the learning of the shallow features of the student network. FMC makes the size and direction of each deep feature map consistent between student and teacher networks, which constrains each pair of feature maps to produce the same feature distribution. FDD and FMC allow a thin student network to learn rich "privilege information" in feature maps of a wide teacher network. The overall performance of FMD is verified in multiple recognition tasks by comparing it with state-of-the-art knowledge distillation methods on low-resolution and noisy objects.
Keywords—Knowledge distillation, Low-resolution, Intelligent video surveillance, Internet of Things, Efficiency
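The abstract describes two loss components: an FDD reconstruction term and an FMC consistency term. A minimal numpy sketch of what these terms might look like, assuming (hypothetically) that "size" is measured by the Frobenius norm, "direction" by cosine similarity of flattened feature maps, and that a decoder has already upsampled the student's shallow features to the teacher's resolution; the paper's exact formulation may differ:

```python
import numpy as np

def fdd_loss(decoded_student_fm, teacher_fm):
    """Feature Decoder Distillation (sketch): mean-squared error between
    the student's decoded (upsampled) shallow features and the teacher's
    high-resolution features."""
    return float(np.mean((decoded_student_fm - teacher_fm) ** 2))

def fmc_loss(student_fm, teacher_fm, eps=1e-8):
    """Feature Map Consistency-enforcement (sketch): penalize mismatch in
    the size (Frobenius norm) and direction (cosine similarity) of a pair
    of deep feature maps."""
    s, t = student_fm.ravel(), teacher_fm.ravel()
    size_term = (np.linalg.norm(s) - np.linalg.norm(t)) ** 2
    cos = s @ t / (np.linalg.norm(s) * np.linalg.norm(t) + eps)
    return float(size_term + (1.0 - cos))

# Toy check: identical feature maps yield (near-)zero loss.
fm = np.arange(12.0).reshape(3, 4)
print(fdd_loss(fm, fm))            # → 0.0
print(round(fmc_loss(fm, fm), 6))  # → 0.0
```

Identical maps give zero loss, while maps of equal norm but opposite direction are penalized only by the cosine term, which is why both terms are needed.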
 
2. "A Two-phase Knowledge Distillation Model for Graph Convolutional Network-based Recommendation" has been accepted by International Journal of Intelligent Systems (CAA B, CAS Tier 1, IF: 8.709).
Abstract—Graph convolutional network (GCN)-based recommendation has recently attracted significant attention in the recommender system community. Although current studies propose various GCNs to improve recommendation performance, existing methods suffer from two main limitations. First, user-item interaction data is generally sparse in practice, which makes these methods ineffective at learning user and item feature representations. Second, they usually perform a dot-product operation to model and calculate user preferences on items, leading to inaccurate user preference learning. To address these limitations, this study adopts a design idea that sharply differs from existing works. Specifically, we introduce the knowledge distillation concept into GCN-based recommendation and propose a two-phase knowledge distillation model (TKDM) to improve recommendation performance. In Phase I, a self-distillation method on a graph auto-encoder learns the user and item feature representations. This auto-encoder employs a simple two-layer GCN as an encoder and a fully-connected layer as a decoder. On this basis, in Phase II, a mutual-distillation method on a fully-connected layer is introduced to learn user preferences on items with triple-based Bayesian personalized ranking. Extensive experiments on three real-world datasets demonstrate that TKDM outperforms classic and state-of-the-art methods related to GCN-based recommendation problems.
Keywords—Graph convolutional network, Knowledge distillation, Recommender system, Neural network, Deep learning
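The abstract names three building blocks: a two-layer GCN encoder (Phase I) and, in Phase II, triple-based Bayesian personalized ranking plus a mutual-distillation signal between peer predictors. A minimal numpy illustration under stated assumptions: the adjacency normalization, the use of ReLU, and the symmetric-KL mutual-distillation term are all hypothetical choices, not the paper's exact formulation:

```python
import numpy as np

def gcn_encoder(A_hat, X, W1, W2):
    """Two-layer GCN encoder (sketch): A_hat is assumed to be a
    normalized adjacency matrix, X the node feature matrix."""
    H = np.maximum(A_hat @ X @ W1, 0.0)  # first layer with ReLU
    return A_hat @ H @ W2                # second (linear) layer

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(pos_scores, neg_scores):
    """Triple-based BPR: for each (user, positive item, negative item)
    triple, push the positive score above the negative one."""
    return float(-np.mean(np.log(sigmoid(pos_scores - neg_scores) + 1e-10)))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def mutual_distill_loss(scores_a, scores_b):
    """Mutual distillation (sketch): symmetric KL divergence between the
    item-score distributions of two peer predictors, so each also learns
    from the other's soft output."""
    p, q = softmax(scores_a), softmax(scores_b)
    kl = lambda a, b: float(np.sum(a * np.log((a + 1e-10) / (b + 1e-10))))
    return 0.5 * (kl(p, q) + kl(q, p))

pos = np.array([2.0, 1.5])
neg = np.array([0.5, -0.2])
print(bpr_loss(pos, neg))  # small positive value; shrinks as the margin grows
```

In a full training loop these terms would be weighted and summed; the sketch only shows why BPR rewards correct ranking of each triple and why the mutual term vanishes when both predictors agree.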
 

