Most Important Contributions since 2014

 

X. Zhou, X. Gao, J. Wang, H. Yu, Z. Wang, and Z. Chi, “Eye Tracking Data Guided Feature Selection for Image Classification,” Pattern Recognition, Vol. 63, pp. 56–70, March 2017. (PDF)

 

Brief description: The research findings reported in this paper came from collaborative research among Soochow University, The Hong Kong Polytechnic University and the University of University. To the best of our knowledge, this is the first work to use eye-tracking techniques for feature selection, an important step in image classification. We propose to leverage the value of eye tracking data for image classification, introducing the concept of human-in-the-loop machine learning to a pattern recognition task.

 

J. Chen, Z. Chen, Z. Chi, and H. Fu, “Facial Expression Recognition in Video with Multiple Feature Fusion,” IEEE Transactions on Affective Computing, Vol. 9, No. 1, pp. 38–50, January–March 2018. (PDF)

 

Brief description: Our work on facial expression recognition in video with multiple feature fusion was published in one of the top international journals in the field of affective computing. Video-based affective computing is a long-standing problem and an important research topic in human-computer interaction and robotics. In our study, both the visual modality (face images) and the audio modality (speech) are utilized, and novel feature descriptors are proposed. We applied multiple feature fusion to tackle the video-based facial expression recognition problem, yielding a more robust solution than other state-of-the-art methods. This paper has already received 33 citations (Web of Science Core Collection). As of September/October 2018, this highly cited paper had received enough citations to place it in the top 1% of the academic field of Computer Science, based on the highly cited threshold for the field and publication year.

 

J. Chen, Z. Chi, and H. Fu, "A New Framework with Multiple Tasks for Detecting and Locating Pain Events in Video," Computer Vision and Image Understanding, Vol. 155, pp. 113–123, February 2017. (PDF)

 

Brief description: Medical diagnosis and assessment are important areas where machine learning and big data can play a major role. This is one of only a few attempts reported in the literature to extend facial expression recognition techniques to a medical assessment task: detecting and locating pain events in video. It is a challenging problem for facial expression analysis because of spontaneous expressions, head movements and pose variations. We explore the role of facial information at various time scales and propose a new framework for detecting and locating pain events in video that outperforms other state-of-the-art methods.

 

Z. Liang, B.G. Xu, Z. Chi and D. Feng, “Relative Saliency Model over Multiple Images with an Application to Yarn Surface Evaluation,” IEEE Transactions on Cybernetics, Vol. 44, No. 8, pp. 1249–1258, DOI: 10.1109/TCYB.2013.2281618, August 2014. (PDF)

 

Brief description: This is collaborative research between the Department of Electronic and Information Engineering and the Institute of Textiles and Clothing. It is a good example of how our image processing techniques can be applied to the textile industry. In this paper, a pioneering study of visual attention modelling for comparing multiple images is presented, and a relative saliency model over multiple images is proposed to enable relative saliency evaluation. The performance of our model was demonstrated on a yarn surface evaluation task.

 


 
