MCLab Seminar/Machine Learning

Emotion Recognition - General

  1. 신동일, "Emotion Recognition Technology Trends (감정인식 기술동향)," 주간기술동향, Feb. 14, 2007. [1]
  2. "Emotional ICT Industry and Technology Trends (감성 ICT 산업 및 기술동향)," ETRI, 2014. [2]
  3. Lapo Pierguidi and Stefania Righi, "Emotion Recognition and Aging: Research Perspectives," Clinical and Experimental Psychology (Clin Exp Psychol), Volume 2, Issue 2, January 2016. [3]
  4. Tom Shapiro, How Emotion-Detection Technology Will Change Marketing, October 2016. [4]
  5. http://www.affectiva.com/
  6. Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin, A Practical Guide to Support Vector Classification, Technical Report, National Taiwan University. [5]
  7. C. Vinola and K. Vimaladevi, "A Survey on Human Emotion Recognition Approaches, Databases and Applications," Electronic Letters on Computer Vision and Image Analysis, 14(2):24-44, 2015. [6]

Emotion Recognition via Facial Expression

For Experiments

  1. Mohammad Soleymani, Jeroen Lichtenauer, Thierry Pun, and Maja Pantic, A Multimodal Database for Affect Recognition and Implicit Tagging, IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, VOL. 3, NO. 1, JANUARY-MARCH 2012. [7]
  2. Emotion Recognition With Python, OpenCV and a Face Dataset [8]
  3. Cohn-Kanade (CK and CK+) database Download Site [9]
  4. Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, and Zara Ambadar, "The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression." [10]
  5. Rensselaer Polytechnic Institute, Intelligent Systems Lab, database Site [11]
  6. MMI database Site[12]
  7. JAFFE database Site [13]
  8. Ligang Zhang, Dian Tjondronegoro, Vinod Chandran, "Facial expression recognition experiments with data from television broadcasts and the World Wide Web," Image and Vision Computing, Volume 32, Issue 2, February 2014, Pages 107–119 [14]
  9. UT Dallas, Face Perception Research Lab, database Download Site [15]
    • Related paper: Automatic Recognition of Facial Actions in Spontaneous Expressions [16]
    • Spontaneous expressions (non-posed expressions)
  10. The Bosphorus Database, http://bosphorus.ee.boun.edu.tr/
  11. X. Zhang, et al., "BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database," Image Vis. Comput. (2014), http://dx.doi.org/10.1016/j.imavis.2014.06.002
  12. Emotime - Recognizing emotional states in faces, https://github.com/luca-m/emotime

Using Kinect

  1. Hesham A. Alabbasi, Florica Moldoveanu, and Alin Moldoveanu, "Real Time Facial Emotion Recognition using Kinect V2 Sensor," IOSR Journal of Computer Engineering (IOSR-JCE), Volume 17, Issue 3, Ver. II (May-Jun. 2015), pp. 61-68. [17]
    • features: 17 animation units from Kinect 2
    • using NN
  2. Adam Wyrembelski, "Detection of the selected, basic emotions based on face expression using Kinect," [18]
  3. M. Puica, "Towards a Computational Model of Emotions for Enhanced Agent Performance," Ph.D. thesis, University Politehnica of Bucharest, 2013. [19] (presentation)
    • After reading the presentation, read the emotion recognition chapter of the Ph.D. thesis.
    • Uses 18 distances among the 58 face points (from Kinect) as features
    • Alternatively, uses the face points themselves as features; classification with an NN
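The distance-based feature extraction above can be sketched as follows. The point coordinates and index pairs here are toy stand-ins; the actual 18 distance pairs over the 58 Kinect points from the thesis are not reproduced.

```python
import numpy as np

def distance_features(points, pairs):
    """Build a feature vector of Euclidean distances between selected
    face-point pairs (e.g. mouth corners, eyebrow-to-eye distances)."""
    points = np.asarray(points, dtype=float)
    return np.array([np.linalg.norm(points[i] - points[j]) for i, j in pairs])

# Toy stand-in for one Kinect face-point frame (4 points, 2 chosen pairs)
pts = [[0, 0, 0], [3, 4, 0], [0, 0, 1], [1, 0, 0]]
feats = distance_features(pts, [(0, 1), (0, 3)])  # -> array([5., 1.])
```

A per-frame vector like this would then be fed to the NN classifier; in the alternative variant, the flattened point coordinates themselves serve as the feature vector.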
  4. G. R. Vineetha, C. Sreeji, and J. Lentin, "Face Expression Detection Using Microsoft Kinect with the Help of Artificial Neural Network," Trends in Innovative Computing 2012 - Intelligent Systems Design. [20]
    • Similar approach to Puica's.
    • edge detection and token generation
  5. A. Youssef, S. F. Aly, A. Ibrahim, and A. Lynn, "Auto-Optimized Multimodal Expression Recognition Framework Using 3D Kinect Data for ASD Therapeutic Aid," International Journal of Modeling and Optimization, Vol. 3, No. 2, April 2013. [21]
    • 121 3D face points are aligned by Procrustes analysis (superimpose points to the reference shape)
    • SVM with scaling and/or face splitting (into upper and lower faces) shows better accuracy
    • Testing with persons not participating in training
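The Procrustes alignment step (superimposing each captured point set onto a reference shape before classification) can be sketched with SciPy; the 10-point shape below is a stand-in for the 121 Kinect points.

```python
import numpy as np
from scipy.spatial import procrustes

# Reference shape: stand-in for a mean 3D face shape
rng = np.random.default_rng(0)
reference = rng.standard_normal((10, 3))

# Simulated capture: the same shape rotated, scaled, and translated,
# as it might appear at a different head pose and distance
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
captured = 1.7 * reference @ Rz.T + np.array([5.0, -2.0, 3.0])

# Procrustes removes translation, scale, and rotation; the residual
# disparity is near zero because the shapes differ only by pose
ref_std, aligned, disparity = procrustes(reference, captured)
```

After alignment, pose and scale no longer pollute the features, which is what makes the subsequent SVM step meaningful across subjects and sessions.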
  6. Billy Y. L. Li, Ajmal S. Mian, Wanquan Liu, and Aneesh Krishna, "Using Kinect for Face Recognition Under Varying Poses, Expressions, Illumination and Disguise," Curtin University and The University of Western Australia. [22]
  7. Gaurav, G., Samarth, B., Mayank, V., and Richa, S., "On RGB-D Face Recognition using Kinect," IIIT Delhi. [23]
  8. Qi-rong Mao, Xin-yu Pan, Yong-zhao Zhan, and Xiang-jun Shen, "Using Kinect for real-time emotion recognition via facial expressions," Frontiers of Information Technology & Electronic Engineering, 16(4), pp. 272-282, 2015. [24]
    • 7-way 1-vs-1 real-time SVM classification using AUs and/or FPPs from Kinect (not v2)
    • suggests a fusion algorithm that takes the maximum confidence over 30 frames
    • shows that FPPs are more accurate than AUs, but AUs seem more robust across various distances
    • shows 1-vs-1 classification superior to multi-class classification
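A minimal sketch of the 1-vs-1 SVM plus max-confidence fusion idea, using scikit-learn (whose SVC trains one binary SVM per class pair internally). The 2-D synthetic features and three classes are illustrative stand-ins for the Kinect AU/FPP features and seven emotions.

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: 3 "emotion" classes from 2-D stand-in features
rng = np.random.default_rng(1)
centers = np.array([[0, 0], [4, 0], [0, 4]])
X = np.vstack([c + 0.3 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)

# SVC fits one binary SVM per class pair (one-vs-one) internally
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

def fuse_window(model, frames):
    """Pick the label whose confidence peaks anywhere in a frame window."""
    probs = model.predict_proba(np.asarray(frames))  # (n_frames, n_classes)
    _, best_class = np.unravel_index(np.argmax(probs), probs.shape)
    return model.classes_[best_class]

# 30 simulated frames drifting around class 1's center
window = centers[1] + 0.3 * rng.standard_normal((30, 2))
label = fuse_window(clf, window)  # -> 1
```

Taking the single most confident frame in the window is one simple fusion rule; averaging or majority voting over the 30 frames are common alternatives.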
  9. Zhang, Yang, Li Zhang, and M. Alamgir Hossain. "Adaptive 3D facial action intensity estimation and emotion recognition." Expert Systems with Applications 42.3 (2015): 1446-1464.[25]
    • facial AU intensity estimation by motion-based facial features and automatic feature selection based on mRMR
    • facial emotion recognition from AU intensity using adaptive ensemble classifiers for each of the 6 basic emotions
    • detects novel emotions
    • off-line training using Bosphorus database
    • on-line and real-time testing from Kinect 121 3D face points
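A toy greedy mRMR-style selector, assuming relevance is mutual information with the label and redundancy is approximated by mean absolute correlation with already-selected features (the paper's exact mRMR formulation may differ):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr_select(X, y, k):
    """Greedy mRMR: maximize relevance (MI with the label) minus
    redundancy (mean |correlation| with already-selected features)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for f in range(X.shape[1]):
            if f in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, f], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[f] - redundancy
            if score > best_score:
                best, best_score = f, score
        selected.append(best)
    return selected

# Toy data: feature 0 is informative, feature 1 duplicates it, feature 2 is noise
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 50)
f0 = y + 0.1 * rng.standard_normal(100)
X = np.column_stack([f0, f0 + 0.01 * rng.standard_normal(100),
                     rng.standard_normal(100)])
chosen = mrmr_select(X, y, 2)
```

Note how the redundancy penalty rejects the duplicated feature in favor of the uncorrelated one, which is the point of mRMR over plain relevance ranking.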
  10. Safae Elhoufi, Maha Jazouli, Aicha Majda, Arsalane Zarghili, and Rachid Aalouane, "Automatic Recognition of Facial Expressions using Microsoft Kinect with artificial neural network," International Conference on Engineering & MIS (ICEMIS), IEEE, 2016. [26]
  11. Chanthaphan, Nattawat, et al. "Facial emotion recognition based on facial motion stream generated by Kinect." Signal-Image Technology & Internet-Based Systems (SITIS), 2015 11th International Conference on. IEEE, 2015. [27]
    • using the Kinect v2 HD face API
    • adapts Zhao's Structured Streaming Skeleton (SSS) feature extraction approach and Dynamic Time Warping (DTW) distance
    • shows that the average accuracy of the SSS feature outperforms a simple distance feature
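A textbook DTW distance, the metric used above to compare facial motion streams of different speeds; this is a generic implementation, not the paper's code.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences,
    via the standard O(n*m) dynamic-programming recurrence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same motion pattern at two different speeds aligns with zero cost,
# which a frame-by-frame distance could not achieve
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
d = dtw_distance(slow, fast)  # -> 0.0
```

For multi-dimensional face-point streams, `cost` would be a per-frame vector distance (e.g. Euclidean) instead of a scalar difference.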
  12. Z. Zhang, "Microsoft Kinect sensor and its effect," IEEE MultiMedia, vol. 19, no. 2, 2012, pp. 4-12. [28]

General

  1. Qayyum, Huma, et al. "Facial Expression Recognition Using Stationary Wavelet Transform Features." Mathematical Problems in Engineering 2017 (2017).[29]
  2. S. Berretti, B. Ben Amor, M. Daoudi, and A. del Bimbo, “3D facial expression recognition using SIFT descriptors of automatically detected key points”, The Visual Computer, vol. 27, no. 11, 2011.[30]
    • to be read
  3. G. Sandbach, S. Zafeiriou, M. Pantic, and D. Rueckert, “A dynamic approach to the recognition of 3D facial expressions and their temporal models," in Proc. 9th IEEE International Conference on Automatic Face and Gesture Recognition, March 2011. [31]
  4. Yang Zhang, Intelligent Emotion Recognition from Facial and Whole-body Expressions using Adaptive Ensemble Models, Doctoral thesis, Northumbria University, March 2015. [32]
  5. Arman Savran, Bulent Sankur, and M. Taha Bilge, "Comparative evaluation of 3D vs. 2D modality for automatic detection of facial action units," Pattern Recognition, Volume 45, Issue 2, February 2012, Pages 767-782. [33]
  6. Zhao, Xin, et al. "Structured Streaming Skeleton--A New Feature for Online Human Gesture Recognition." ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 11.1s (2014): 22. [34]
  7. Yongmian Zhang, Qiang Ji, "Active and Dynamic Information Fusion for Facial Expression Understanding from Image Sequences," IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 27, NO. 5, MAY 2005. [35]
  8. Shibiao Xu, Guanghui Ma, Weiliang Meng, and Xiaopeng Zhang, "Statistical learning based facial animation," Journal of Zhejiang University SCIENCE C (Computers & Electronics), 14(7), pp. 542-550, 2013. [36]
    • a database of 20 facial expressions for 4 emotions, captured with Kinect
  9. Gwen Littlewort, Marian Stewart Bartlett, Ian Fasel, Joshua Susskind, Javier Movellan, "Dynamics of Facial Expression Extracted Automatically from Video," Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW’04) [37]
  10. Ismail Arı, Asli Uyar, and Lale Akarun, "Facial Feature Tracking and Expression Recognition for Sign Language." [38]
    • multi-resolution ASM tracker on video
    • extract motion feature vector
    • SVM classifier
  11. Tutorial: Xiaojun Qi, Active Shape Model and Active Appearance Model [39]
  12. Li Zhang, Kamlesh Mistry, Siew Chin Neoh, and Chee Peng Lim, "Intelligent facial emotion recognition using moth-firefly optimization." [40]
    • Facial contour extraction using combined LBP (LBP, LGBP, LBPV)
    • Recognizing seven expressions (happiness, fear, disgust, surprise, sadness, anger, and neutral)
  13. M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan, "Automatic Recognition of Facial Actions in Spontaneous Expressions." [41]
    • Spontaneous expressions (non-posed expressions)

Books

  1. Amit Konar and Aruna Chakraborty (Eds.), Emotion Recognition: A Pattern Analysis Approach, Wiley, 2015 [42]

Tutorials

  1. Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning [43]

Emotion Recognition using Bio Sensors

  1. Amare Ketsela Tesfaye and Amrit Pandey, Empirical Evaluation of Machine Learning Algorithms based on EMG, ECG and GSR Data to Classify Emotional States, Master’s Thesis Computer Science Thesis no: MCS-2013:03, Blekinge Institute of Technology, Sweden, March 2013 , http://www.diva-portal.org/smash/get/diva2:830984/FULLTEXT01.pdf
  2. Pengchao Shangguan, Guangyuan Liu "The Emotion Recognition Based on GSR Signal by Curve Fitting ," Journal of Information & Computational Science 11:8 (2014) 2635–2646, http://www.joics.com/publishedpapers/2014_11_8_2635_2646.pdf
    • The rising pattern of the GSR signal is curve-fitted and the resulting coefficients are used as features. The experimental procedure is worth consulting.
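The coefficients-as-features idea can be sketched with SciPy's curve_fit; the saturating-exponential model below is an illustrative assumption, not necessarily the curve family used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model for a GSR rising phase: saturating exponential
def gsr_rise(t, amplitude, tau):
    return amplitude * (1.0 - np.exp(-t / tau))

# Synthetic "GSR" samples generated from known parameters plus noise
rng = np.random.default_rng(3)
t = np.linspace(0, 5, 100)
signal = gsr_rise(t, 2.0, 0.8) + 0.01 * rng.standard_normal(t.size)

# Fit the model; the recovered coefficients become the feature vector
(amplitude, tau), _ = curve_fit(gsr_rise, t, signal, p0=[1.0, 1.0])
features = np.array([amplitude, tau])
```

One such coefficient vector per skin-conductance response would then feed the classifier, replacing the raw waveform with a compact, shape-describing representation.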
  3. P. J. Lang, "The emotion probe: Studies of motivation and attention," American Psychologist, 50(5), pp. 372-385, 1995
    • emotions according to arousal and valence
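In the arousal-valence view, discrete labels are often read off the quadrants of the plane; a minimal illustrative mapping (the labels here are examples, not Lang's):

```python
def quadrant_emotion(valence, arousal):
    """Map a (valence, arousal) point in [-1, 1]^2 to a coarse emotion
    label, one per quadrant of the arousal-valence plane."""
    if arousal >= 0:
        return "excited/happy" if valence >= 0 else "angry/afraid"
    return "calm/content" if valence >= 0 else "sad/bored"

label = quadrant_emotion(0.7, 0.5)  # -> "excited/happy"
```

Regressing continuous valence and arousal first and then discretizing like this is a common alternative to training a direct multi-class emotion classifier.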
  4. MEASURING EMOTION: REACTIONS TO MEDIA, SHIMMER, DUBLIN, IRELAND, 2015, http://www.shimmersensing.com/assets/images/content/case-study-files/Emotional_Response_27July2015.pdf
    • GSR and HR measurement and data analysis; data collected while screening short films
  5. Mahdis Monajati, Seyed Hamidreza Abbasi, Fereidoon Shabaninia, and Sina Shamekhi, "Emotions States Recognition Based on Physiological Parameters by Employing of Fuzzy-Adaptive Resonance Theory," International Journal of Intelligence Science, 2012, 2, 166-175.
    • Unsupervised learning on GSR, HR, and RR data; emotion measured via questionnaire.
    • focus on negative emotion recognition
  6. Manida Swangnetr and David B. Kaber, "Emotional State Classification in Patient–Robot Interaction Using Wavelet Analysis and Statistics-Based Feature Selection," IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, VOL. 43, NO. 1, JANUARY 2013, pp. 63-74
  7. Adnan Ghaderi, Javad Frounchi, and Alireza Farnam, "Machine Learning-based Signal Processing Using Physiological Signals for Stress Detection," 22nd Iranian Conference on Biomedical Engineering (ICBME 2015), Iranian Research Organization for Science and Technology (IROST), Tehran, Iran, 25-27 November 2015
  8. Arturo Nakasone, Helmut Prendinger, Mitsuru Ishizuka, "Emotion Recognition from Electromyography and Skin Conductance," [44]

Books

  1. A. Shashua, Introduction to Machine Learning, 2008, http://arxiv.org/pdf/0904.3664v1.pdf
  2. T. Hastie, R. Tibshirani, J. Friedman, The Element of Statistical Learning: Data Mining, Inference, and Prediction, Springer, 2009, http://statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf
  3. Neural Networks and Deep Learning, on-line book, http://neuralnetworksanddeeplearning.com/
  4. Y. Bengio, Learning Deep Architectures for AI, 2009, http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf
  5. Yoshua Bengio, Ian Goodfellow and Aaron Courville, Deep Learning, An MIT Press book in preparation, http://www.deeplearningbook.org/

Deep Learning Tutorials

  1. Deep Learning Tutorials, http://deeplearning.net/tutorial/
  2. Hinton's Relevant literature, http://www.cs.toronto.edu/~hinton/deeprefs.html
  3. Yoshua Bengio, http://videolectures.net/okt09_bengio_ldhr/
  4. Using convolutional neural nets to detect facial keypoints tutorial, http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/
    • using Theano
  5. Tutorial on Deep Learning for Vision, in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2014, https://sites.google.com/site/deeplearningcvpr2014/

Related Sites

  1. The Center for the Study of Emotion and Attention, U. of Florida, http://csea.phhp.ufl.edu/index.html
    • Provides emotion-eliciting media such as IAPS
  2. LISA, http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/WebHome
  3. http://deeplearning.net/

Software Links

  1. Theano – CPU/GPU symbolic expression compiler in python (from MILA lab at University of Montreal), http://deeplearning.net/software/theano/tutorial/
  2. Torch – provides a Matlab-like environment for state-of-the-art machine learning algorithms in lua (from Ronan Collobert, Clement Farabet and Koray Kavukcuoglu), http://www.torch.ch/
  3. Pylearn2 - Pylearn2 is a library designed to make machine learning research easy, https://github.com/lisa-lab/pylearn2
  4. Blocks - A Theano framework for training neural networks, https://github.com/mila-udem/blocks
  5. Tensorflow - TensorFlow is an open source software library for numerical computation using data flow graphs, http://www.tensorflow.org/get_started/index.html
  6. Caffe - Caffe is a deep learning framework made with expression, speed, and modularity in mind, http://caffe.berkeleyvision.org/
  7. Lasagne - Lasagne is a lightweight library to build and train neural networks in Theano, https://github.com/Lasagne/Lasagne

Readings on Deep Learning

  1. Readings on Deep Learning, http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/ReadingOnDeepNetworks