From October 25 to 29, 2022, the second China Journal of Image Graphics Postgraduate Academic Forum was successfully held. The forum comprised six themed sessions: medical image processing, remote sensing image processing, frontiers of image fusion, document image processing, AR/VR frontiers, and frontiers of vision and learning. After several days of on-site presentations, interactive Q&A, expert commentary, and rigorous review by the expert committee, the winners of the first prize, second prize, and excellence award for outstanding reports are announced as follows:
(1) Outstanding Reports of the Medical Image Processing Forum
First prize: Huang Zhongyu (Institute of Automation, Chinese Academy of Sciences); Qu Linhao (Research Center of Digital Medicine, Fudan University)
Second prize: Xu Han (Wuhan University); Jin Qiuye (Fudan University); Luo Xiaoyuan (Fudan University)
Excellence award: Zhao Kun (Beijing University of Aeronautics and Astronautics); Yue Hailin (Central South University); Wu Ruoyu (Chongqing University); Xing Wenyu (Fudan University); Chang Xuebin (Xi'an Jiaotong University)
Link: http://www.cjig.cn/jig/ch/reader/view_news.aspx?id=20221101090315001
Title: Graph-Enhanced Emotion Neural Decoding
Abstract: Brain signal-based affective computing has recently drawn considerable attention due to its potential widespread applications. Most existing efforts exploit emotion similarities or brain region similarities to learn emotion representations. However, the relationships between emotions and brain regions are not explicitly incorporated into the representation learning process. Consequently, the learned representations may not be informative enough to benefit downstream tasks, e.g., emotion decoding. In this work, we propose a novel neural decoding framework, Graph-enhanced Emotion Decoding (GED), which integrates the relationships between emotions and brain regions into the neural decoding process via a bipartite graph structure. Further analysis shows that exploiting such relationships helps the model learn better representations, verifying the rationality and effectiveness of GED. Extensive experiments on visually evoked emotion datasets demonstrate the superiority of our model.
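To make the bipartite-graph idea concrete, here is a minimal, hypothetical PyTorch sketch of how region-emotion relationships could be folded into a decoding model. This is not the authors' GED implementation: the class name BipartiteGraphDecoder, all dimensions, the uniform adjacency matrix, and the mean-aggregation scheme are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class BipartiteGraphDecoder(nn.Module):
    """Hypothetical sketch of graph-enhanced emotion decoding.

    Per-region brain features are propagated to emotion nodes over a
    bipartite region-emotion graph, then each emotion node is scored.
    Names and dimensions are illustrative, not taken from the GED paper.
    """

    def __init__(self, n_regions, n_emotions, feat_dim, hidden_dim):
        super().__init__()
        # Learnable embedding for each emotion node.
        self.emotion_emb = nn.Parameter(torch.randn(n_emotions, hidden_dim))
        # Project per-region brain features into the shared hidden space.
        self.region_proj = nn.Linear(feat_dim, hidden_dim)
        # Bipartite adjacency (emotions x regions); fully connected here,
        # but in practice it would encode known region-emotion links.
        self.register_buffer("adj", torch.ones(n_emotions, n_regions))
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        # x: (batch, n_regions, feat_dim) brain signals per region.
        h_region = torch.relu(self.region_proj(x))              # (B, R, H)
        # Row-normalize so each emotion averages over its linked regions.
        norm_adj = self.adj / self.adj.sum(dim=1, keepdim=True).clamp(min=1)
        # Message passing: aggregate region features into emotion nodes.
        h_emotion = torch.einsum("er,brh->beh", norm_adj, h_region)
        # Combine aggregated messages with the emotion embeddings.
        h_emotion = h_emotion + self.emotion_emb                # (B, E, H)
        return self.score(h_emotion).squeeze(-1)                # (B, E) logits


# Usage: decode 6 emotions from 90 brain regions with 64-d features.
model = BipartiteGraphDecoder(n_regions=90, n_emotions=6,
                              feat_dim=64, hidden_dim=32)
logits = model(torch.randn(8, 90, 64))  # batch of 8 -> logits of shape (8, 6)
```

The key design point the abstract emphasizes is that the region-emotion graph enters the decoding computation itself (the message-passing step above), rather than the model relying only on emotion-to-emotion or region-to-region similarities.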